Re: [HACKERS] [sqlsmith] PANIC: failed to add BRIN tuple

2016-05-30 Thread Alvaro Herrera
Andreas Seltenreich wrote:
> Alvaro Herrera writes:
> 
> > If you can re-run sqlsmith and see if you can find different bugs, I'd
> > appreciate it.
> [...]
> > [2. text/x-diff; brincrash-2.patch]
> 
> BRIN has been inconspicuous since applying this patch.  All coredumps I see
> now are either due to the parallel worker shutdown issue or acl.c's
> text/name confusion, both reported earlier.

Great.  Pushed patch, after some further cosmetic changes.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [sqlsmith] PANIC: failed to add BRIN tuple

2016-05-29 Thread Andreas Seltenreich
Alvaro Herrera writes:

> If you can re-run sqlsmith and see if you can find different bugs, I'd
> appreciate it.
[...]
> [2. text/x-diff; brincrash-2.patch]

BRIN has been inconspicuous since applying this patch.  All coredumps I see
now are either due to the parallel worker shutdown issue or acl.c's
text/name confusion, both reported earlier.

regards,
Andreas




Re: [HACKERS] [sqlsmith] PANIC: failed to add BRIN tuple

2016-05-27 Thread Alvaro Herrera
Andreas Seltenreich wrote:
> I wrote:
> 
> > Re-fuzzing now with your patch applied.
> 
> This so far yielded three BRIN core dumps on different machines with the
> same backtraces.  Crash recovery fails in these cases.
> 
> I've put the data directory here (before recovery):
> 
> http://ansel.ydns.eu/~andreas/brincrash2-spidew.tar.xz (9.1M)
> 
> Corresponding backtraces of the backend and startup core files below.

Ah, of course.  These two crashes are two different bugs: the xlog one
is because I forgot to attach the new PageAddItem() flag to the
corresponding replay code; and the one in regular running is because I
neglected to have the patched PageAddItem() compute pd_lower correctly,
which in effect left pages as if empty.
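
To spell out the second problem: PageGetMaxOffsetNumber() is derived from
pd_lower, so unless the line pointer array is extended to cover the large
offset, the page keeps reporting itself as (nearly) empty.  A rough sketch of
the invariant, assuming the usual bufpage layout (illustration only, not the
committed fix; the helper name below is made up):

#include "postgres.h"
#include "storage/bufpage.h"

/*
 * Illustration only, not the committed code: the smallest pd_lower that is
 * consistent with an item stored at 'offsetNumber', possibly beyond the
 * current last line pointer (the PAI_ALLOW_LARGE_OFFSET case).  Any
 * intermediate slots are assumed to already exist as unused line pointers.
 */
static Size
required_pd_lower(Page page, OffsetNumber offsetNumber)
{
	OffsetNumber limit = OffsetNumberNext(PageGetMaxOffsetNumber(page));

	if (offsetNumber >= limit)
		/* the line pointer array must grow to cover the new slot */
		return SizeOfPageHeaderData + offsetNumber * sizeof(ItemIdData);

	/* reusing an existing (unused) line pointer: no growth needed */
	return ((PageHeader) page)->pd_lower;
}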

I came up with a simple-minded test program (attached) which reproduces
the reported crashes in a couple of seconds; it can be improved further
for stronger BRIN testing, but at least with this I am confident that
the crashes at hand are gone.  If you can re-run sqlsmith and see if you
can find different bugs, I'd appreciate it.

I'm not going to push this just yet, in order to give others time to
comment on the new PageAddItemFlags API I'm adding.
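
For illustration, here is how a hypothetical caller would map the old boolean
arguments onto the proposed flags bitmap (not code from the attached patch;
the payload and function name are placeholders):

#include "postgres.h"
#include "storage/bufpage.h"

/*
 * Hypothetical demo of the proposed API; the payload is a placeholder, not a
 * real index tuple.
 */
static void
pageadditem_flags_demo(Page page)
{
	char		itup[16] = "placeholder";

	/* Existing third-party code keeps compiling against PageAddItem(): */
	if (PageAddItem(page, (Item) itup, sizeof(itup),
					InvalidOffsetNumber, false, false) == InvalidOffsetNumber)
		elog(ERROR, "failed to add tuple");

	/* ... which now maps onto the flags-based entry point like this: */
	if (PageAddItemFlags(page, (Item) itup, sizeof(itup),
						 InvalidOffsetNumber, 0) == InvalidOffsetNumber)
		elog(ERROR, "failed to add tuple");

	/*
	 * Callers that overwrite in place pass PAI_OVERWRITE; only BRIN's
	 * delete-and-reinsert pattern additionally needs PAI_ALLOW_LARGE_OFFSET,
	 * as in the brin_pageops.c hunk of the attached patch.
	 */
}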

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
commit 6fa8e8ce3812dd6be2ae962ffe076a06482906c9
Author: Alvaro Herrera 
AuthorDate: Fri May 27 18:14:01 2016 -0400
CommitDate: Fri May 27 18:14:01 2016 -0400

Fix PageAddItem BRIN bug

BRIN was relying on the ability to remove a tuple from an index page
and then put it back again later; but the PageAddItem interface doesn't
actually allow this to happen for tuples beyond the first free one past
the last used item.  To fix, create a new function PageAddItemFlags
which is like PageAddItem except that it has a flags bitmap; the
"overwrite" and "is_heap" boolean flags in the original become
PAI_OVERWRITE and PAI_IS_HEAP flags, and a new flag
PAI_ALLOW_LARGE_OFFSET enables the behavior required by BRIN.
PageAddItem() remains with the original signature, for compatibility
with third-party modules (and other callers in core code are not
modified, either).

I also added a new error check in brinGetTupleForHeapBlock to raise an
error if an offset found in the revmap is not actually marked as live by
the page header, which would have detected this sooner (and, in any
case, reacted with a "corrupted BRIN index" error suggesting a REINDEX,
rather than a hard crash).

Reported by Andreas Seltenreich as detected by his handy sqlsmith
fuzzer.

diff --git a/src/backend/access/brin/brin_pageops.c b/src/backend/access/brin/brin_pageops.c
index d0ca485..4b1afce 100644
--- a/src/backend/access/brin/brin_pageops.c
+++ b/src/backend/access/brin/brin_pageops.c
@@ -179,8 +179,8 @@ brin_doupdate(Relation idxrel, BlockNumber pagesPerRange,
 
 		START_CRIT_SECTION();
 		PageIndexDeleteNoCompact(oldpage, &oldoff, 1);
-		if (PageAddItem(oldpage, (Item) newtup, newsz, oldoff, true,
-		false) == InvalidOffsetNumber)
+		if (PageAddItemFlags(oldpage, (Item) newtup, newsz, oldoff,
+			 PAI_OVERWRITE | PAI_ALLOW_LARGE_OFFSET) == InvalidOffsetNumber)
 			elog(ERROR, "failed to add BRIN tuple");
 		MarkBufferDirty(oldbuf);
 
diff --git a/src/backend/access/brin/brin_revmap.c b/src/backend/access/brin/brin_revmap.c
index 812f76c..853181b 100644
--- a/src/backend/access/brin/brin_revmap.c
+++ b/src/backend/access/brin/brin_revmap.c
@@ -272,6 +272,10 @@ brinGetTupleForHeapBlock(BrinRevmap *revmap, BlockNumber heapBlk,
 		/* If we land on a revmap page, start over */
 		if (BRIN_IS_REGULAR_PAGE(page))
 		{
+			if (*off > PageGetMaxOffsetNumber(page))
+				ereport(ERROR,
+						(errcode(ERRCODE_INDEX_CORRUPTED),
+						 errmsg_internal("corrupted BRIN index: inconsistent range map")));
 			lp = PageGetItemId(page, *off);
 			if (ItemIdIsUsed(lp))
 			{
diff --git a/src/backend/access/brin/brin_xlog.c b/src/backend/access/brin/brin_xlog.c
index deb7af4..3103775 100644
--- a/src/backend/access/brin/brin_xlog.c
+++ b/src/backend/access/brin/brin_xlog.c
@@ -193,7 +193,8 @@ brin_xlog_samepage_update(XLogReaderState *record)
 			elog(PANIC, "brin_xlog_samepage_update: invalid max offset number");
 
 		PageIndexDeleteNoCompact(page, &offnum, 1);
-		offnum = PageAddItem(page, (Item) brintuple, tuplen, offnum, true, false);
+		offnum = PageAddItemFlags(page, (Item) brintuple, tuplen, offnum,
+								  PAI_OVERWRITE | PAI_ALLOW_LARGE_OFFSET);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "brin_xlog_samepage_update: failed to add tuple");
 
diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index 0189bc9..d227d4b 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -153,12 +153,12 @@ PageIsVerified(Page page, BlockNumber blkno)
 
 
 /*
- *	PageAddItem
+ *	

Re: [HACKERS] [sqlsmith] PANIC: failed to add BRIN tuple

2016-05-25 Thread Andreas Seltenreich
I wrote:

> Re-fuzzing now with your patch applied.

This so far yielded three BRIN core dumps on different machines with the
same backtraces.  Crash recovery fails in these cases.

I've put the data directory here (before recovery):

http://ansel.ydns.eu/~andreas/brincrash2-spidew.tar.xz (9.1M)

Corresponding backtraces of the backend and startup core files below.

regards,
Andreas

Core was generated by `postgres: smith brintest [local] UPDATE'.
Program terminated with signal SIGABRT, Aborted.
#0  0x7f5a49a9c067 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
56  ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  0x7f5a49a9c067 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x7f5a49a9d448 in __GI_abort () at abort.c:89
#2  0x007ec979 in errfinish (dummy=dummy@entry=0) at elog.c:557
#3  0x007f012c in elog_finish (elevel=elevel@entry=20, 
fmt=fmt@entry=0x989cf0 "incorrect index offsets supplied") at elog.c:1378
#4  0x006eda2f in PageIndexDeleteNoCompact 
(page=page@entry=0x7f5a48c6f200 "Y", itemnos=itemnos@entry=0x7ffe6d0d0abc, 
nitems=nitems@entry=1) at bufpage.c:1011
#5  0x00470119 in brin_doupdate (idxrel=0x1c5a530, pagesPerRange=1, 
revmap=0x91a7d90, heapBlk=2166, oldbuf=3782, oldoff=2, origtup=0x719db80, 
origsz=3656, newtup=0x754e188, newsz=3656, samepage=1 '\001') at 
brin_pageops.c:181
#6  0x0046e5db in brininsert (idxRel=0x1c5a530, values=0x6591, 
nulls=0x6 , 
heaptid=0x, heapRel=0x7f5a4a72f700, 
checkUnique=UNIQUE_CHECK_NO) at brin.c:244
#7  0x005d888f in ExecInsertIndexTuples (slot=0x91a4870, 
tupleid=0x91a7df4, estate=0x91b9b38, noDupErr=0 '\000', specConflict=0x0, 
arbiterIndexes=0x0) at execIndexing.c:383
#8  0x005f74e5 in ExecUpdate (tupleid=0x7ffe6d0d0ed0, oldtuple=0x6591, 
slot=0x91a4870, planSlot=0x, epqstate=0x7f5a4a72f700, 
estate=0x91b9b38, canSetTag=1 '\001') at nodeModifyTable.c:1015
#9  0x005f7b7c in ExecModifyTable (node=0x71861c0) at 
nodeModifyTable.c:1501
#10 0x005dd5e8 in ExecProcNode (node=node@entry=0x71861c0) at 
execProcnode.c:396
#11 0x005d963f in ExecutePlan (dest=0x17bb870, direction=, numberTuples=0, sendTuples=, operation=CMD_UPDATE, 
use_parallel_mode=, planstate=0x71861c0, estate=0x91b9b38) at 
execMain.c:1567
#12 standard_ExecutorRun (queryDesc=0x17bb908, direction=, 
count=0) at execMain.c:338
#13 0x006f74d9 in ProcessQuery (plan=, 
sourceText=0x1670e18 "update public.brintest set \n  charcol = null, \n  
namecol = pg_catalog.name(cast(null as character varying)\n), \n  int8col = 
null, \n  int2col = public.brintest.int2col, \n  textcol = null, \n  float4col 
= null, \n  float8col = cast(coalesce(null,\n\npublic.brintest.float8col) 
as double precision), \n  inetcol = public.brintest.inetcol, \n  cidrcol = 
public.brintest.cidrcol, \n  bpcharcol = null, \n  datecol = (select datecol 
from public.brintest limit 1 offset 25)\n, \n  timecol = (select timecol from 
public.brintest limit 1 offset 42)\n, \n  timetzcol = (select timetzcol from 
public.brintest limit 1 offset 3)\n, \n  numericcol = (select numericcol from 
public.brintest limit 1 offset 32)\n, \n  uuidcol = 
public.brintest.uuidcol\nreturning \n  public.brintest.datecol as c0;", 
params=0x0, dest=0x17bb870, completionTag=0x7ffe6d0d10a0 "") at pquery.c:185
#14 0x006f776f in PortalRunMulti (portal=portal@entry=0x1611b68, 
isTopLevel=isTopLevel@entry=1 '\001', dest=dest@entry=0x17bb870, 
altdest=0xc96680 , 
completionTag=completionTag@entry=0x7ffe6d0d10a0 "") at pquery.c:1267
#15 0x006f7a1c in FillPortalStore (portal=portal@entry=0x1611b68, 
isTopLevel=isTopLevel@entry=1 '\001') at pquery.c:1044
#16 0x006f846d in PortalRun (portal=0x1611b68, 
count=9223372036854775807, isTopLevel=, dest=0x756c870, 
altdest=0x756c870, completionTag=0x7ffe6d0d1450 "") at pquery.c:782
#17 0x006f5c73 in exec_simple_query (query_string=) at 
postgres.c:1094
#18 PostgresMain (argc=23141224, argv=0x2cab518, dbname=0x15f3578 "brintest", 
username=0x2cab570 "\030\265\312\002") at postgres.c:4059
#19 0x0046c8d2 in BackendRun (port=0x1618880) at postmaster.c:4258
#20 BackendStartup (port=0x1618880) at postmaster.c:3932
#21 ServerLoop () at postmaster.c:1690
#22 0x006907fe in PostmasterMain (argc=argc@entry=4, 
argv=argv@entry=0x15f2560) at postmaster.c:1298
#23 0x0046d82d in main (argc=4, argv=0x15f2560) at main.c:228
(gdb) frame 4
#4  0x006eda2f in PageIndexDeleteNoCompact 
(page=page@entry=0x7f5a48c6f200 "Y", itemnos=itemnos@entry=0x7ffe6d0d0abc, 
nitems=nitems@entry=1) at bufpage.c:1011
1011	elog(ERROR, "incorrect index offsets supplied");
(gdb) list
1006	}
1007	}
1008
1009	/* this will catch invalid or out-of-order itemnos[] */
1010 

Re: [HACKERS] [sqlsmith] PANIC: failed to add BRIN tuple

2016-05-25 Thread Andreas Seltenreich
I wrote:
> Alvaro Herrera writes:
>> How long does it take for you to reproduce this panic in the unpatched
>> code?
>
> I could probably speed it up by creating lots of additional BRIN indexes
> in the regression database, and by compiling a sqlsmith that generates
> update statements only.

This actually worked.  Sqlsmith triggered the BRIN panic twice in 80e6
queries (vs. once in 4e9 before).  I pushed the modified version on the
branch "modify-heavy".  Re-fuzzing now with your patch applied.

andreas




Re: [HACKERS] [sqlsmith] PANIC: failed to add BRIN tuple

2016-05-24 Thread Andreas Seltenreich
Alvaro Herrera writes:

> How long does it take for you to reproduce this panic in the unpatched
> code?

Very long, I'm afraid.  I only observed it once, and according to the
logging database, about 4e9 random queries were generated since testing
with 9.5 code.

I could probably speed it up by creating lots of additional BRIN indexes
in the regression database, and by compiling a sqlsmith that generates
update statements only.




Re: [HACKERS] [sqlsmith] PANIC: failed to add BRIN tuple

2016-05-24 Thread Alvaro Herrera
Andreas Seltenreich wrote:
> There was one instance of this PANIC when testing with the regression db
> of master at 50e5315.
> 
> ,
> | WARNING:  specified item offset is too large
> | PANIC:  failed to add BRIN tuple
> | server closed the connection unexpectedly
> `
> 
> It is reproducible with the query below on this instance only.  I've put
> the data directory (20MB) here:
> 
> http://ansel.ydns.eu/~andreas/brincrash.tar.xz
> 
> The instance was running on Debian Jessie amd64.  Query and Backtrace
> below.

How long does it take for you to reproduce this panic in the unpatched
code?  This patch fixes the immediate problem, but it'd be good if the
bug is fixed more generally.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
commit 7ecb10d515980f592a3bd7fb27f883af8404d762
Author: Alvaro Herrera 
AuthorDate: Tue May 24 19:09:37 2016 -0400
CommitDate: Tue May 24 19:10:01 2016 -0400

fix PageAddItem BRIN bug

diff --git a/src/backend/access/brin/brin_pageops.c b/src/backend/access/brin/brin_pageops.c
index d0ca485..a49eccd 100644
--- a/src/backend/access/brin/brin_pageops.c
+++ b/src/backend/access/brin/brin_pageops.c
@@ -179,8 +179,9 @@ brin_doupdate(Relation idxrel, BlockNumber pagesPerRange,
 
 		START_CRIT_SECTION();
 		PageIndexDeleteNoCompact(oldpage, &oldoff, 1);
-		if (PageAddItem(oldpage, (Item) newtup, newsz, oldoff, true,
-		false) == InvalidOffsetNumber)
+		if (PageAddItemFlags(oldpage, (Item) newtup, newsz, oldoff,
+			 PAI_OVERWRITE | PAI_ALLOW_LARGE_OFFSET) ==
+			InvalidOffsetNumber)
 			elog(ERROR, "failed to add BRIN tuple");
 		MarkBufferDirty(oldbuf);
 
diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index 0189bc9..cc7a4d8 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -153,12 +153,12 @@ PageIsVerified(Page page, BlockNumber blkno)
 
 
 /*
- *	PageAddItem
+ *	PageAddItemFlags
  *
  *	Add an item to a page.  Return value is offset at which it was
  *	inserted, or InvalidOffsetNumber if there's not room to insert.
  *
- *	If overwrite is true, we just store the item at the specified
+ *	If flag PAI_OVERWRITE is set, we just store the item at the specified
  *	offsetNumber (which must be either a currently-unused item pointer,
  *	or one past the last existing item).  Otherwise,
  *	if offsetNumber is valid and <= current max offset in the page,
@@ -167,18 +167,20 @@ PageIsVerified(Page page, BlockNumber blkno)
  *	If offsetNumber is not valid, then assign one by finding the first
  *	one that is both unused and deallocated.
  *
- *	If is_heap is true, we enforce that there can't be more than
+ *	If flag PAI_IS_HEAP is set, we enforce that there can't be more than
  *	MaxHeapTuplesPerPage line pointers on the page.
  *
+ *	If flag PAI_ALLOW_LARGE_OFFSET is not set, we disallow placing items
+ *	beyond one past the last existing item.
+ *
  *	!!! EREPORT(ERROR) IS DISALLOWED HERE !!!
  */
 OffsetNumber
-PageAddItem(Page page,
-			Item item,
-			Size size,
-			OffsetNumber offsetNumber,
-			bool overwrite,
-			bool is_heap)
+PageAddItemFlags(Page page,
+ Item item,
+ Size size,
+ OffsetNumber offsetNumber,
+ int flags)
 {
 	PageHeader	phdr = (PageHeader) page;
 	Size		alignedSize;
@@ -209,7 +211,7 @@ PageAddItem(Page page,
 	if (OffsetNumberIsValid(offsetNumber))
 	{
 		/* yes, check it */
-		if (overwrite)
+		if ((flags & PAI_OVERWRITE) != 0)
 		{
 			if (offsetNumber < limit)
 			{
@@ -257,13 +259,18 @@ PageAddItem(Page page,
 		}
 	}
 
-	if (offsetNumber > limit)
+	/*
+	 * Reject placing items beyond the first unused line pointer, unless
+	 * caller explicitly asked for that.
+	 */
+	if (((flags & PAI_ALLOW_LARGE_OFFSET) == 0) && offsetNumber > limit)
 	{
 		elog(WARNING, "specified item offset is too large");
 		return InvalidOffsetNumber;
 	}
 
-	if (is_heap && offsetNumber > MaxHeapTuplesPerPage)
+	/* Reject placing items beyond heap boundary, if heap */
+	if (((flags & PAI_IS_HEAP) != 0) && offsetNumber > MaxHeapTuplesPerPage)
 	{
 		elog(WARNING, "can't put more than MaxHeapTuplesPerPage items in a heap page");
 		return InvalidOffsetNumber;
@@ -324,6 +331,21 @@ PageAddItem(Page page,
 }
 
 /*
+ * PageAddItemFlags shim, for compatibility.
+ */
+OffsetNumber
+PageAddItem(Page page, Item item, Size size, OffsetNumber offsetNumber,
+			bool overwrite, bool is_heap)
+{
+	int		flags = 0;
+
+	flags |= overwrite ? PAI_OVERWRITE : 0;
+	flags |= is_heap ? PAI_IS_HEAP : 0;
+
+	return PageAddItemFlags(page, item, size, offsetNumber, flags);
+}
+
+/*
  * PageGetTempPage
  *		Get a temporary page in local memory for special processing.
  *		The returned page is not initialized at all; caller must do that.
diff --git a/src/include/storage/bufpage.h b/src/include/storage/bufpage.h
index 63053d4..5ca1db7 100644
--- a/src/include/storage/bufpage.h
+++ 

Re: [HACKERS] [sqlsmith] PANIC: failed to add BRIN tuple

2016-05-23 Thread Alvaro Herrera
Andreas Seltenreich wrote:
> There was one instance of this PANIC when testing with the regression db
> of master at 50e5315.
> 
> ,
> | WARNING:  specified item offset is too large
> | PANIC:  failed to add BRIN tuple
> | server closed the connection unexpectedly
> `
> 
> It is reproducible with the query below on this instance only.

Hm, so this is an over-eager check.  As I understand, what seems to have
happened is that the page was filled with up to 10 tuples, then tuples
0-8 were removed, probably moved to other pages.  When this update runs,
it needs to update the remaining tuple, which is a normal thing for BRIN
to do.  But BRIN doesn't want the item number to change, because it's
referenced from the "revmap"; so it removes the item and then wants to
insert it again.  But bufpage.c does not expect callers to put items
beyond one past the last currently used item, so it
raises a warning and returns without doing the insert.  This drives BRIN
crazy.
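
For readers reconstructing the scenario, a minimal backend-side sketch of that
delete-then-reinsert pattern (hypothetical code, not taken from the thread; it
uses the pre-patch six-argument PageAddItem() and a dummy payload instead of a
real BRIN tuple, and on unpatched code the final re-insert should trip the
"specified item offset is too large" warning):

#include "postgres.h"
#include "storage/bufpage.h"

/* Sketch only: mimic brin_doupdate()'s same-offset re-insert on a scratch page. */
static void
delete_then_readd_demo(void)
{
	Page		page = (Page) palloc(BLCKSZ);
	char		payload[16] = "dummy tuple";
	OffsetNumber off;
	OffsetNumber target = InvalidOffsetNumber;
	int			i;

	PageInit(page, BLCKSZ, 0);

	/* fill a few items; remember the offset of the last one */
	for (i = 0; i < 3; i++)
		target = PageAddItem(page, (Item) payload, sizeof(payload),
							 InvalidOffsetNumber, false, false);

	/* remove everything before the last item, as index maintenance might */
	for (off = FirstOffsetNumber; off < target; off++)
		PageIndexDeleteNoCompact(page, &off, 1);

	/* delete the survivor, then try to put it back at its old offset */
	PageIndexDeleteNoCompact(page, &target, 1);
	if (PageAddItem(page, (Item) payload, sizeof(payload),
					target, true, false) == InvalidOffsetNumber)
		elog(WARNING, "re-insert at offset %u was rejected", target);

	pfree(page);
}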

I tried simply removing the "return InvalidOffsetNumber" line from that
block (so that it still throws the warning but it does execute the index
insert), and everything seems to behave correctly.  I suppose a simple
fix would be to add a flag to PageAddItem() and skip this block in that
case, but that would break the ABI in 9.5.  I'm not real sure what's a
good fix yet.  Maybe a new routine specific to the needs of BRIN is
called for.

I would also like to come up with a way to have this scenario be tested
by a new regression test.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [sqlsmith] PANIC: failed to add BRIN tuple

2016-05-22 Thread Alvaro Herrera
Andreas Seltenreich wrote:
> There was one instance of this PANIC when testing with the regression db
> of master at 50e5315.
> 
> ,
> | WARNING:  specified item offset is too large
> | PANIC:  failed to add BRIN tuple
> | server closed the connection unexpectedly
> `
> 
> It is reproducible with the query below on this instance only.  I've put
> the data directory (20MB) here:
> 
> http://ansel.ydns.eu/~andreas/brincrash.tar.xz

Thanks for all the details.  I'll be looking into this tomorrow.


-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

