Re: [HACKERS] Re: PD_ALL_VISIBLE flag was incorrectly set happend during repeatable vacuum
On 08.03.2011 04:07, Greg Stark wrote:
> Well from that log you definitely have OldestXmin going backwards. And
> not by a little bit either. At 6:33 it set the all_visible flag and
> then at 7:01 it was almost 1.3 million transactions earlier. In fact
> to precisely the same value that was in use for a transaction at 1:38.
> That seems like a bit of a coincidence, though it's not repeated
> earlier.

Yep. After staring at GetOldestXmin() again, it finally struck me how
OldestXmin can move backwards. You need two databases for it, which
probably explains why this has been so elusive. Here's how to reproduce
it:

CREATE DATABASE foodb;
CREATE DATABASE bardb;

session 1, in foodb:

foodb=# begin isolation level serializable;
BEGIN
foodb=# CREATE TABLE foo (a int4); -- just something to force this xact to have an xid
CREATE TABLE
foodb=#
(leave the transaction open)

session 2, in bardb:

bardb=# CREATE TABLE foo AS SELECT 1;
SELECT
bardb=# vacuum foo; -- to set the PD_ALL_VISIBLE flag
VACUUM

session 3, in bardb:

bardb=# begin isolation level serializable;
BEGIN
bardb=# SELECT 1;
 ?column?
----------
        1
(1 row)

(leave the transaction open)

session 2, in bardb:

bardb=# vacuum foo;
WARNING:  PD_ALL_VISIBLE flag was incorrectly set in relation "foo" page 0 (OldestXmin 803)
VACUUM
bardb=#

When there are no other transactions active in the same database,
GetOldestXmin() returns just latestCompletedXid. When you open a
transaction in the same database after that, its xid will be above
latestCompletedXid, but its xmin includes transactions from all
databases, and there might be a transaction in some other database with
an xid that precedes the value that GetOldestXmin() returned earlier.

I'm not sure what to do about that. One idea is to track two xmin
values in the proc array: one that includes transactions in all
databases, and another that only includes transactions in the same
database. GetOldestXmin() (when allDbs is false) would only pay
attention to the latter. It would add a few instructions to
GetSnapshotData(), though.

Another idea is to give up on the warning when it appears that
oldestxmin has moved backwards, and assume that it's actually fine. We
could still warn in other cases where the flag appears to be
incorrectly set, like if there is a deleted tuple on the page.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
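To make the mechanism concrete, here is a self-contained toy simulation
of the two-database scenario (a sketch only, not the actual procarray.c
source; it uses plain integer comparisons where the real code uses
TransactionIdPrecedes() to handle xid wraparound):

/* toy model of GetOldestXmin(allDbs = false) -- NOT PostgreSQL source */
#include <stdio.h>

typedef unsigned int TransactionId;

typedef struct
{
    TransactionId xid;        /* backend's own xid, 0 if none */
    TransactionId xmin;       /* oldest xid its snapshot can see,
                               * computed across ALL databases */
    unsigned int  databaseId; /* database the backend is connected to */
} ToyProc;

static TransactionId
toy_oldest_xmin(const ToyProc *procs, int nprocs,
                TransactionId latestCompletedXid, unsigned int mydb)
{
    TransactionId result = latestCompletedXid + 1;

    for (int i = 0; i < nprocs; i++)
    {
        if (procs[i].databaseId != mydb)
            continue;          /* only same-database backends count... */
        if (procs[i].xid != 0 && procs[i].xid < result)
            result = procs[i].xid;
        /* ...but their xmin already includes other databases' xids */
        if (procs[i].xmin != 0 && procs[i].xmin < result)
            result = procs[i].xmin;
    }
    return result;
}

int main(void)
{
    /* session 1 holds xid 803 open in foodb (database 1) */
    ToyProc before[] = {{803, 803, 1}};
    /* bardb (database 2) is idle: the first vacuum sees 805 */
    printf("first vacuum:  OldestXmin = %u\n",
           toy_oldest_xmin(before, 1, 804, 2));

    /* session 3 then opens a transaction in bardb; its xmin is 803 */
    ToyProc after[] = {{803, 803, 1}, {805, 803, 2}};
    /* the second vacuum sees 803 -- OldestXmin moved backwards */
    printf("second vacuum: OldestXmin = %u\n",
           toy_oldest_xmin(after, 2, 805, 2));
    return 0;
}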
Re: [HACKERS] Re: PD_ALL_VISIBLE flag was incorrectly set happend during repeatable vacuum
On 08.03.2011 10:00, Heikki Linnakangas wrote:
> Another idea is to give up on the warning when it appears that
> oldestxmin has moved backwards, and assume that it's actually fine. We
> could still warn in other cases where the flag appears to be
> incorrectly set, like if there is a deleted tuple on the page.

This is probably a better idea, at least in back-branches. It also
handles the case of twiddling vacuum_defer_cleanup_age, which tracking
two xmins per transaction would not handle.

Here's a patch. I also changed the warning per Robert's suggestion.
Anyone see a hole in this?

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com

diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 67e0be9..dbd2e8d 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -332,6 +332,7 @@ lazy_scan_heap(Relation onerel, LVRelStats *vacrelstats,
 	Size		freespace;
 	bool		all_visible_according_to_vm = false;
 	bool		all_visible;
+	bool		has_dead_tuples;
 
 	/*
 	 * Skip pages that don't require vacuuming according to the visibility
@@ -475,6 +476,7 @@ lazy_scan_heap(Relation onerel, LVRelStats *vacrelstats,
 		 * requiring freezing.
 		 */
 		all_visible = true;
+		has_dead_tuples = false;
 		nfrozen = 0;
 		hastup = false;
 		prev_dead_count = vacrelstats->num_dead_tuples;
@@ -613,6 +615,7 @@ lazy_scan_heap(Relation onerel, LVRelStats *vacrelstats,
 			{
 				lazy_record_dead_tuple(vacrelstats, &(tuple.t_self));
 				tups_vacuumed += 1;
+				has_dead_tuples = true;
 			}
 			else
 			{
@@ -671,9 +674,22 @@ lazy_scan_heap(Relation onerel, LVRelStats *vacrelstats,
 			PageSetAllVisible(page);
 			SetBufferCommitInfoNeedsSave(buf);
 		}
-		else if (PageIsAllVisible(page) && !all_visible)
+		/*
+		 * It's possible for the value returned by GetOldestXmin() to move
+		 * backwards, so it's not wrong for us to see tuples that appear to
+		 * not be visible to everyone yet, while PD_ALL_VISIBLE is already
+		 * set. The real safe xmin value never moves backwards, but
+		 * GetOldestXmin() is conservative and sometimes returns a value
+		 * that's unnecessarily small, so if we see that contradiction it
+		 * just means that the tuples that we think are not visible to
+		 * everyone yet actually are, and the PD_ALL_VISIBLE flag is correct.
+		 *
+		 * There shouldn't be any dead tuples on a page with PD_ALL_VISIBLE
+		 * set, however.
+		 */
+		else if (PageIsAllVisible(page) && has_dead_tuples)
 		{
-			elog(WARNING, "PD_ALL_VISIBLE flag was incorrectly set in relation \"%s\" page %u",
+			elog(WARNING, "page containing dead tuples is marked as all-visible in relation \"%s\" page %u",
 				 relname, blkno);
 			PageClearAllVisible(page);
 			SetBufferCommitInfoNeedsSave(buf);
Re: [HACKERS] Re: PD_ALL_VISIBLE flag was incorrectly set happend during repeatable vacuum
On Tue, Mar 08, 2011 at 10:00:01AM +0200, Heikki Linnakangas wrote:
> On 08.03.2011 04:07, Greg Stark wrote:
>> Well from that log you definitely have OldestXmin going backwards.
>> And not by a little bit either. At 6:33 it set the all_visible flag
>> and then at 7:01 it was almost 1.3 million transactions earlier. In
>> fact to precisely the same value that was in use for a transaction
>> at 1:38. That seems like a bit of a coincidence, though it's not
>> repeated earlier.
>
> Yep. After staring at GetOldestXmin() again, it finally struck me how
> OldestXmin can move backwards. You need two databases for it, which
> probably explains why this has been so elusive.
>
> ...
>
> When there are no other transactions active in the same database,
> GetOldestXmin() returns just latestCompletedXid. When you open a
> transaction in the same database after that, its xid will be above
> latestCompletedXid, but its xmin includes transactions from all
> databases, and there might be a transaction in some other database
> with an xid that precedes the value that GetOldestXmin() returned
> earlier.
>
> I'm not sure what to do about that. One idea is to track two xmin
> values in the proc array: one that includes transactions in all
> databases, and another that only includes transactions in the same
> database. GetOldestXmin() (when allDbs is false) would only pay
> attention to the latter. It would add a few instructions to
> GetSnapshotData(), though.
>
> Another idea is to give up on the warning when it appears that
> oldestxmin has moved backwards, and assume that it's actually fine. We
> could still warn in other cases where the flag appears to be
> incorrectly set, like if there is a deleted tuple on the page.

I read this to mean that it is safe to ignore this warning, and that
these databases are not at risk for data corruption or wrong results so
long as the warning is due to oldestxmin. Please correct me if I have
misunderstood.

Thanks

-dg

--
David Gould       da...@sonic.net      510 536 1443    510 282 0869
If simplicity worked, the world would be overrun with insects.
Re: [HACKERS] Re: PD_ALL_VISIBLE flag was incorrectly set happend during repeatable vacuum
On Tue, Mar 08, 2011 at 10:37:24AM +0200, Heikki Linnakangas wrote:
> On 08.03.2011 10:00, Heikki Linnakangas wrote:
>> Another idea is to give up on the warning when it appears that
>> oldestxmin has moved backwards, and assume that it's actually fine.
>> We could still warn in other cases where the flag appears to be
>> incorrectly set, like if there is a deleted tuple on the page.
>
> This is probably a better idea, at least in back-branches. It also
> handles the case of twiddling vacuum_defer_cleanup_age, which tracking
> two xmins per transaction would not handle.
>
> Here's a patch. I also changed the warning per Robert's suggestion.
> Anyone see a hole in this?

It would be helpful to have the dbname and schema in the message in
addition to the relname. I added those to the original diagnostic
patch, as it was not clear that the messages were all related to the
same page/table/db.

Also, in your comment you might mention that multiple databases are one
way we could see oldestxmin move backwards.

-dg

--
David Gould       da...@sonic.net      510 536 1443    510 282 0869
If simplicity worked, the world would be overrun with insects.
Re: [HACKERS] Re: PD_ALL_VISIBLE flag was incorrectly set happend during repeatable vacuum
On 08.03.2011 10:38, daveg wrote:
> I read this to mean that it is safe to ignore this warning, and that
> these databases are not at risk for data corruption or wrong results
> so long as the warning is due to oldestxmin. Please correct me if I
> have misunderstood.

Yes, that's correct.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
Re: [HACKERS] Re: PD_ALL_VISIBLE flag was incorrectly set happend during repeatable vacuum
On 08.03.2011 10:49, daveg wrote:
> On Tue, Mar 08, 2011 at 10:37:24AM +0200, Heikki Linnakangas wrote:
>> On 08.03.2011 10:00, Heikki Linnakangas wrote:
>>> Another idea is to give up on the warning when it appears that
>>> oldestxmin has moved backwards, and assume that it's actually fine.
>>> We could still warn in other cases where the flag appears to be
>>> incorrectly set, like if there is a deleted tuple on the page.
>>
>> This is probably a better idea, at least in back-branches. It also
>> handles the case of twiddling vacuum_defer_cleanup_age, which
>> tracking two xmins per transaction would not handle.
>>
>> Here's a patch. I also changed the warning per Robert's suggestion.
>> Anyone see a hole in this?
>
> It would be helpful to have the dbname and schema in the message in
> addition to the relname. I added those to the original diagnostic
> patch, as it was not clear that the messages were all related to the
> same page/table/db.

Hmm, we don't usually include database name and schema in messages like
this. There's a couple of other warnings in vacuum, too, that only
print the table name. I have to admit that the database name was
crucial in tracking this down, but in hindsight you could've added the
database name to log_line_prefix for the same effect. If you have
several databases with the same schema, that's a good idea anyway. So
in the end, I decided not to include it.

> Also, in your comment you might mention that multiple databases are
> one way we could see oldestxmin move backwards.

Ok, I added a comment to GetOldestXmin() explaining that.

Committed. Thanks David for your help in debugging this! And thanks to
everyone else for helping out too.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
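For reference, the log_line_prefix approach mentioned above would be
something like this in postgresql.conf (one possible prefix; %t is the
timestamp, %p the backend PID, and %d the database name):

log_line_prefix = '%t [%p] %d '

With that set, each warning in the server log carries the database name
without any code change.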
Re: [HACKERS] Re: PD_ALL_VISIBLE flag was incorrectly set happend during repeatable vacuum
On Fri, Mar 04, 2011 at 05:52:29PM +0200, Heikki Linnakangas wrote:
> Hmm, if these all came from the same database, then it looks like
> OldestXmin has moved backwards. That would explain the warnings. First
> one vacuum determines that all the tuples are visible to everyone and
> sets the flag. Then another vacuum runs with an older OldestXmin, and
> determines that there's a tuple on the page with an xmin that is not
> yet visible to everyone, hence it thinks that the flag should not have
> been set yet.
>
> Looking at the code, I don't see how that situation could arise,
> though. The value calculated by GetOldestXmin() should never move
> backwards. And GetOldestXmin() is called in lazy_vacuum_rel(), after
> it has acquired a lock on the table, which should protect from a race
> condition where two vacuums could run on the table one after another,
> in a way where the later vacuum runs with an OldestXmin calculated
> before the first vacuum.
>
> Hmm, fiddling with vacuum_defer_cleanup_age on the fly could cause
> that, though. You don't do that, do you?

No. I've updated the patch to collect db and schema, and added Merlin's
patch as well, and run it for a while. The attached log is all the
debug messages for pg_statistic page 333 from one database. I've also
attached the two most recent page images for that particular page; the
last digits in the filename are the hour and minute of when the page
was saved.

What else can I be doing to help figure this out?

Thanks

-dg

--
David Gould       da...@sonic.net      510 536 1443    510 282 0869
If simplicity worked, the world would be overrun with insects.

2011-03-05 00:50:03.238 PST 13304 WARNING: debugging: setting PD_ALL_VISIBLE in relation c57.pg_catalog.pg_statistic on page 333 (OldestXmin 337186202)
2011-03-05 21:55:00.004 PST 10702 WARNING: debugging: setting PD_ALL_VISIBLE in relation c57.pg_catalog.pg_statistic on page 333 (OldestXmin 341342282)
2011-03-05 23:28:06.368 PST 16660 WARNING: debugging: setting PD_ALL_VISIBLE in relation c57.pg_catalog.pg_statistic on page 333 (OldestXmin 342000646)
2011-03-06 05:24:03.461 PST 12850 WARNING: debugging: setting PD_ALL_VISIBLE in relation c57.pg_catalog.pg_statistic on page 333 (OldestXmin 343420345)
2011-03-06 06:23:40.174 PST 21650 WARNING: debugging: setting PD_ALL_VISIBLE in relation c57.pg_catalog.pg_statistic on page 333 (OldestXmin 343773970)
2011-03-06 07:50:49.921 PST 3847 WARNING: debugging: setting PD_ALL_VISIBLE in relation c57.pg_catalog.pg_statistic on page 333 (OldestXmin 344373260)
2011-03-06 09:11:35.662 PST 10820 WARNING: debugging: setting PD_ALL_VISIBLE in relation c57.pg_catalog.pg_statistic on page 333 (OldestXmin 344660206)
2011-03-07 01:23:57.124 PST 14121 WARNING: debugging: setting PD_ALL_VISIBLE in relation c57.pg_catalog.pg_statistic on page 333 (OldestXmin 347873961)
2011-03-07 01:25:01.129 PST 14266 WARNING: PD_ALL_VISIBLE flag was incorrectly set in relation c57.pg_catalog.pg_statistic page 333 via OldestXmin (OldestXmin 347186993)
2011-03-07 01:26:01.143 PST 14356 WARNING: debugging: setting PD_ALL_VISIBLE in relation c57.pg_catalog.pg_statistic on page 333 (OldestXmin 347879220)
2011-03-07 01:27:01.158 PST 14456 WARNING: PD_ALL_VISIBLE flag was incorrectly set in relation c57.pg_catalog.pg_statistic page 333 via OldestXmin (OldestXmin 347186993)
2011-03-07 01:33:05.353 PST 15108 WARNING: debugging: setting PD_ALL_VISIBLE in relation c57.pg_catalog.pg_statistic on page 333 (OldestXmin 347705144)
2011-03-07 01:38:09.298 PST 15869 WARNING: debugging: setting PD_ALL_VISIBLE in relation c57.pg_catalog.pg_statistic on page 333 (OldestXmin 347705144)
2011-03-07 01:51:17.622 PST 18829 WARNING: debugging: setting PD_ALL_VISIBLE in relation c57.pg_catalog.pg_statistic on page 333 (OldestXmin 348042728)
2011-03-07 02:24:41.994 PST 22594 WARNING: debugging: setting PD_ALL_VISIBLE in relation c57.pg_catalog.pg_statistic on page 333 (OldestXmin 348116608)
2011-03-07 06:33:47.460 PST 18384 WARNING: debugging: setting PD_ALL_VISIBLE in relation c57.pg_catalog.pg_statistic on page 333 (OldestXmin 349004767)
2011-03-07 07:01:51.400 PST 23896 WARNING: PD_ALL_VISIBLE flag was incorrectly set in relation c57.pg_catalog.pg_statistic page 333 via OldestXmin (OldestXmin 347705144)
2011-03-07 10:23:31.416 PST 10654 WARNING: debugging: setting PD_ALL_VISIBLE in relation c57.pg_catalog.pg_statistic on page 333 (OldestXmin 349660381)

[Attachment: pageimage_c57_pg_catalog_pg_statistic_333.0127 (binary)]
[Attachment: pageimage_c57_pg_catalog_pg_statistic_333.0701 (binary)]
Re: [HACKERS] Re: PD_ALL_VISIBLE flag was incorrectly set happend during repeatable vacuum
On Thu, Mar 03, 2011 at 09:04:04AM -0600, Merlin Moncure wrote:
> On Thu, Mar 3, 2011 at 2:16 AM, Heikki Linnakangas
> <heikki.linnakan...@enterprisedb.com> wrote:
>> On 03.03.2011 09:12, daveg wrote:
>>> Question: what would be the consequence of simply patching out the
>>> setting of this flag? Assuming that the incorrect PD_ALL_VISIBLE
>>> flag is the only problem (big assumption perhaps) then simply never
>>> setting it would at least avoid the possibility of returning wrong
>>> answers, presumably at some performance cost. We possibly could live
>>> with that until we get a handle on the real cause and fix.
>>
>> Yes. With that assumption. If you really want to do that, I would
>> suggest the attached patch instead. This just disables the
>> optimization in seqscans to trust it, so an incorrectly set flag
>> won't affect correctness of query results, but the flag is still set
>> as usual and you still get the warnings so that we can continue to
>> debug the issue.
>
> This. The mis-set flag is likely a bug/concurrency issue etc, but
> could also be a symptom of more sinister data corruption. I did
> various vacuum experiments all day yesterday on my windows workstation
> and was not able to produce any mis-flags. I trust iscsi more than
> nfs, but maybe there is a connection here that is hardware based.
>
> hm. do you think it would be helpful to know what is causing the
> all_visible flag to get flipped? If so, the attached patch shows which
> case is throwing it...

I'll apply your patch and try it. Probably can only do it for a few
minutes tomorrow evening though, as the output is huge and we have only
limited down time availability.

-dg

--
David Gould       da...@sonic.net      510 536 1443    510 282 0869
If simplicity worked, the world would be overrun with insects.
Re: [HACKERS] Re: PD_ALL_VISIBLE flag was incorrectly set happend during repeatable vacuum
On 04.03.2011 11:00, daveg wrote:
> Thanks, I've applied both patches to one host. I'll probably have to
> back down on the debugging logging soon, as the output is pretty
> voluminous; it is producing 100MB of message log every few minutes.
> I'll try Merlin's patch to get the case setting the bit first though.
>
> Anyway, here is a snippet of log with the setting and unsetting of the
> bit on one page. Unfortunately, we don't know which of a dozen odd
> databases the page belongs to, but the timestamps are so close it
> seems likely to be the same one. I've added dbname to the patch and
> will get that next time I can switch binaries.
>
> 2011-03-03 23:28:34.170 PST 2039 WARNING: debugging: setting PD_ALL_VISIBLE in relation pg_statistic on page 5963 (OldestXmin 331848998)
> ...
> /cv/logs/production_03-20110303_232519.log.gz:2011-03-03 23:29:34.194 PST 2115 WARNING: PD_ALL_VISIBLE flag was incorrectly set in relation pg_statistic page 5963 (OldestXmin 331677178)
> ...
> 2011-03-03 23:42:38.323 PST 2808 WARNING: debugging: setting PD_ALL_VISIBLE in relation pg_attribute on page 5963 (OldestXmin 331677178)

Hmm, if these all came from the same database, then it looks like
OldestXmin has moved backwards. That would explain the warnings. First
one vacuum determines that all the tuples are visible to everyone and
sets the flag. Then another vacuum runs with an older OldestXmin, and
determines that there's a tuple on the page with an xmin that is not
yet visible to everyone, hence it thinks that the flag should not have
been set yet.

Looking at the code, I don't see how that situation could arise,
though. The value calculated by GetOldestXmin() should never move
backwards. And GetOldestXmin() is called in lazy_vacuum_rel(), after it
has acquired a lock on the table, which should protect from a race
condition where two vacuums could run on the table one after another,
in a way where the later vacuum runs with an OldestXmin calculated
before the first vacuum.

Hmm, fiddling with vacuum_defer_cleanup_age on the fly could cause
that, though. You don't do that, do you?

> Also, I've attached the relevant page image.

Thanks. There seem to be two tuples on the page, both of which were HOT
updated at some point, but now there's only one version of each left:

postgres=# SELECT lp, lp_off, lp_flags, lp_len, t_xmin, t_xmax, t_ctid, t_infomask, t_infomask2, t_hoff FROM heap_page_items(loread(lo_open(29924, 262144), 8192)) WHERE lp_flags > 0;
 lp | lp_off | lp_flags | lp_len |  t_xmin   | t_xmax |  t_ctid  | t_infomask | t_infomask2 | t_hoff
----+--------+----------+--------+-----------+--------+----------+------------+-------------+--------
  1 |   7608 |        1 |    580 | 331250141 |      0 | (5963,1) |      10499 |      -32747 |     32
  3 |      1 |        2 |      0 |           |        |          |            |             |
  4 |   7528 |        1 |     76 | 331735553 |      0 | (5963,4) |      10497 |      -32747 |     32
 19 |      4 |        2 |      0 |           |        |          |            |             |
(4 rows)

Deciphering those infomasks, the first tuple at lp 1 has these flags
set:

HEAP_UPDATED | HEAP_XMAX_INVALID | HEAP_XMIN_COMMITTED | HEAP_HASNULL | HEAP_HASVARWIDTH

And the 2nd one at lp 4:

HEAP_UPDATED | HEAP_XMAX_INVALID | HEAP_XMIN_COMMITTED | HEAP_HASNULL

So, both of those tuples are live.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
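For anyone decoding those infomask values by hand, a small standalone
helper along these lines reproduces the same breakdown (a sketch, not
PostgreSQL source; the flag constants are copied from the 8.4-era
src/include/access/htup.h, and only the flags relevant here are
listed):

/* infomask_decode.c -- illustrative t_infomask decoder */
#include <stdio.h>

static const struct
{
    unsigned int bit;
    const char *name;
} flags[] = {
    {0x0001, "HEAP_HASNULL"},
    {0x0002, "HEAP_HASVARWIDTH"},
    {0x0100, "HEAP_XMIN_COMMITTED"},
    {0x0200, "HEAP_XMIN_INVALID"},
    {0x0400, "HEAP_XMAX_COMMITTED"},
    {0x0800, "HEAP_XMAX_INVALID"},
    {0x1000, "HEAP_XMAX_IS_MULTI"},
    {0x2000, "HEAP_UPDATED"},
};

static void decode(unsigned int mask)
{
    printf("%u =", mask);
    for (size_t i = 0; i < sizeof(flags) / sizeof(flags[0]); i++)
        if (mask & flags[i].bit)
            printf(" %s", flags[i].name);
    printf("\n");
}

int main(void)
{
    decode(10499);   /* the tuple at lp 1 above */
    decode(10497);   /* the tuple at lp 4 above */
    return 0;
}

Running it prints the same flag sets quoted above: both tuples have
HEAP_XMIN_COMMITTED and HEAP_XMAX_INVALID set, i.e. they are live.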
Re: [HACKERS] Re: PD_ALL_VISIBLE flag was incorrectly set happend during repeatable vacuum
On 03.03.2011 09:12, daveg wrote:
> Question: what would be the consequence of simply patching out the
> setting of this flag? Assuming that the incorrect PD_ALL_VISIBLE flag
> is the only problem (big assumption perhaps) then simply never setting
> it would at least avoid the possibility of returning wrong answers,
> presumably at some performance cost. We possibly could live with that
> until we get a handle on the real cause and fix.

Yes. With that assumption.

If you really want to do that, I would suggest the attached patch
instead. This just disables the optimization in seqscans to trust it,
so an incorrectly set flag won't affect correctness of query results,
but the flag is still set as usual and you still get the warnings so
that we can continue to debug the issue.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 7dcc601..d53aede 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -255,6 +255,11 @@ heapgetpage(HeapScanDesc scan, BlockNumber page)
 	 * transaction in the standby.
 	 */
 	all_visible = PageIsAllVisible(dp) && !snapshot->takenDuringRecovery;
+	/*
+	 * XXX: there seems to be something wrong with the way the flag is set,
+	 * so don't trust it. Remove this when the cause is found.
+	 */
+	all_visible = false;
 
 	for (lineoff = FirstOffsetNumber, lpp = PageGetItemId(dp, lineoff);
 		 lineoff <= lines;
Re: [HACKERS] Re: PD_ALL_VISIBLE flag was incorrectly set happend during repeatable vacuum
On Thu, Mar 03, 2011 at 10:16:29AM +0200, Heikki Linnakangas wrote:
> On 03.03.2011 09:12, daveg wrote:
>> Question: what would be the consequence of simply patching out the
>> setting of this flag? Assuming that the incorrect PD_ALL_VISIBLE flag
>> is the only problem (big assumption perhaps) then simply never
>> setting it would at least avoid the possibility of returning wrong
>> answers, presumably at some performance cost. We possibly could live
>> with that until we get a handle on the real cause and fix.
>
> Yes. With that assumption. If you really want to do that, I would
> suggest the attached patch instead. This just disables the
> optimization in seqscans to trust it, so an incorrectly set flag won't
> affect correctness of query results, but the flag is still set as
> usual and you still get the warnings so that we can continue to debug
> the issue.

Thanks. I'll be applying this tomorrow and will send you some page
images to look at, assuming it still does it. I had a look at how this
gets set and cleared and did not see anything obvious, so I'm pretty
mystified.

Also, we are seeing thousands of these daily for at least a month on 4
large hosts and no-one has noticed any other issues, which surprises
me. Very strange.

-dg

--
David Gould       da...@sonic.net      510 536 1443    510 282 0869
If simplicity worked, the world would be overrun with insects.
Re: [HACKERS] Re: PD_ALL_VISIBLE flag was incorrectly set happend during repeatable vacuum
On Thu, Mar 3, 2011 at 2:16 AM, Heikki Linnakangas
<heikki.linnakan...@enterprisedb.com> wrote:
> On 03.03.2011 09:12, daveg wrote:
>> Question: what would be the consequence of simply patching out the
>> setting of this flag? Assuming that the incorrect PD_ALL_VISIBLE flag
>> is the only problem (big assumption perhaps) then simply never
>> setting it would at least avoid the possibility of returning wrong
>> answers, presumably at some performance cost. We possibly could live
>> with that until we get a handle on the real cause and fix.
>
> Yes. With that assumption. If you really want to do that, I would
> suggest the attached patch instead. This just disables the
> optimization in seqscans to trust it, so an incorrectly set flag won't
> affect correctness of query results, but the flag is still set as
> usual and you still get the warnings so that we can continue to debug
> the issue.

This. The mis-set flag is likely a bug/concurrency issue etc, but could
also be a symptom of more sinister data corruption. I did various
vacuum experiments all day yesterday on my windows workstation and was
not able to produce any mis-flags. I trust iscsi more than nfs, but
maybe there is a connection here that is hardware based.

hm. do you think it would be helpful to know what is causing the
all_visible flag to get flipped? If so, the attached patch shows which
case is throwing it...

merlin

[Attachment: visible_debug.patch]
Re: [HACKERS] Re: PD_ALL_VISIBLE flag was incorrectly set happend during repeatable vacuum
On Tue, Mar 01, 2011 at 08:40:37AM -0500, Robert Haas wrote:
> On Mon, Feb 28, 2011 at 10:32 PM, Greg Stark <gsst...@mit.edu> wrote:
>> On Tue, Mar 1, 2011 at 1:43 AM, David Christensen
>> <da...@endpoint.com> wrote:
>>> Was this cluster upgraded to 8.4.4 from 8.4.0? It sounds to me like
>>> a known bug in 8.4.0 which was fixed by this commit:
>>
>> The reproduction script described was running vacuum repeatedly. A
>> single vacuum run ought to be sufficient to clean up the problem if
>> it was left-over. I wonder if it would help to write a regression
>> test that runs 100 or so vacuums and see if the build farm turns up
>> any examples of this behaviour.
>
> One other thing to keep in mind here is that the warning message we've
> chosen can be a bit misleading. The warning is:
>
> WARNING: PD_ALL_VISIBLE flag was incorrectly set in relation "test" page 1
>
> ...which implies that the state of the tuples is correct, and that the
> page-level bit is wrong in comparison. But I recently saw a case where
> the infomask got clobbered, resulting in this warning. The page-level
> bit was correct, at least relative to the intended page contents; it
> was a tuple on the page that was screwed up. It might have been better
> to pick a more neutral phrasing, like "page is marked all-visible but
> some tuples are not visible".

Yeesh. Yikes. I hope that this is not the case, as we are seeing
thousands of these daily on each of 4 large production hosts. Mostly on
catalogs, especially pg_statistic. However it does occur on some high
delete/insert traffic user tables too.

Question: what would be the consequence of simply patching out the
setting of this flag? Assuming that the incorrect PD_ALL_VISIBLE flag
is the only problem (big assumption perhaps) then simply never setting
it would at least avoid the possibility of returning wrong answers,
presumably at some performance cost. We possibly could live with that
until we get a handle on the real cause and fix. I had a look and don't
really see anything except lazy vacuum that sets it, so it seems simple
to disable. Or have I understood this incorrectly?

Anything else I can be doing to try to track this down?

-dg

--
David Gould       da...@sonic.net      510 536 1443    510 282 0869
If simplicity worked, the world would be overrun with insects.
Re: [HACKERS] Re: PD_ALL_VISIBLE flag was incorrectly set happend during repeatable vacuum
On Mon, Feb 28, 2011 at 10:32 PM, Greg Stark <gsst...@mit.edu> wrote:
> On Tue, Mar 1, 2011 at 1:43 AM, David Christensen <da...@endpoint.com>
> wrote:
>> Was this cluster upgraded to 8.4.4 from 8.4.0? It sounds to me like a
>> known bug in 8.4.0 which was fixed by this commit:
>
> The reproduction script described was running vacuum repeatedly. A
> single vacuum run ought to be sufficient to clean up the problem if it
> was left-over. I wonder if it would help to write a regression test
> that runs 100 or so vacuums and see if the build farm turns up any
> examples of this behaviour.

One other thing to keep in mind here is that the warning message we've
chosen can be a bit misleading. The warning is:

WARNING: PD_ALL_VISIBLE flag was incorrectly set in relation "test" page 1

...which implies that the state of the tuples is correct, and that the
page-level bit is wrong in comparison. But I recently saw a case where
the infomask got clobbered, resulting in this warning. The page-level
bit was correct, at least relative to the intended page contents; it
was a tuple on the page that was screwed up. It might have been better
to pick a more neutral phrasing, like "page is marked all-visible but
some tuples are not visible".

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Re: PD_ALL_VISIBLE flag was incorrectly set happend during repeatable vacuum
On Tue, Mar 1, 2011 at 1:43 AM, David Christensen <da...@endpoint.com> wrote:
> Was this cluster upgraded to 8.4.4 from 8.4.0? It sounds to me like a
> known bug in 8.4.0 which was fixed by this commit:

The reproduction script described was running vacuum repeatedly. A
single vacuum run ought to be sufficient to clean up the problem if it
was left-over. I wonder if it would help to write a regression test
that runs 100 or so vacuums and see if the build farm turns up any
examples of this behaviour.

--
greg
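A quick standalone driver for that kind of stress test might look like
this (a libpq sketch, not an actual pg_regress test; it assumes a table
named "test", takes connection settings from the environment, and
surfaces any server WARNINGs via the notice receiver):

/* vacuum_hammer.c -- run repeated VACUUMs and print any warnings */
#include <stdio.h>
#include <libpq-fe.h>

static void
notice_receiver(void *arg, const PGresult *res)
{
    /* server-side NOTICEs and WARNINGs arrive here */
    fputs(PQresultErrorMessage(res), stderr);
}

int
main(void)
{
    PGconn *conn = PQconnectdb("");   /* PGHOST/PGDATABASE etc. from env */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }
    PQsetNoticeReceiver(conn, notice_receiver, NULL);

    for (int i = 0; i < 100; i++)
    {
        PGresult *res = PQexec(conn, "VACUUM test");

        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "vacuum %d failed: %s", i, PQerrorMessage(conn));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}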