Re: [HACKERS] [PERFORM] In progress INSERT wrecks plans on table

2013-07-13 Thread Abhijit Menon-Sen
At 2013-07-12 16:25:14 -0700, jeff.ja...@gmail.com wrote:

 I think the reviewer of a performance patch should do some independent
 testing of the performance, to replicate the author's numbers; and
 hopefully with a few different scenarios.

You're quite right. I apologise for being lazy; doubly so because I
can't actually see any difference while running the test case with
the patches applied.

unpatched:
before: 1629.831391, 1559.793758, 1498.765018, 1639.384038
during:   37.434492,   37.044989,   37.112422,   36.950895
after :   46.591688,   46.341256,   46.042169,   46.260684

patched:
before: 1813.091975, 1798.923524, 1629.301356, 1606.849033
during:   37.344987,   37.207359,   37.406788,   37.316925
after :   46.657747,   46.537420,   46.746377,   46.577052

(before is before starting session 2; during is after session 2
inserts, but before it commits; after is after session 2 issues a
rollback.)
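
For anyone who wants the shape of that scenario without digging up the
original test: below is a rough libpq sketch of the three measurement
points. It is not the benchmark that produced the numbers above, and the
table and query are only illustrative.

    /* Sketch only: time the same query before, during, and after an open INSERT. */
    #include <stdio.h>
    #include <time.h>
    #include <libpq-fe.h>

    static double
    timed_query(PGconn *conn)
    {
        struct timespec t0, t1;
        PGresult   *res;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        res = PQexec(conn, "SELECT count(*) FROM pgbench_accounts WHERE abalance <> 0");
        clock_gettime(CLOCK_MONOTONIC, &t1);
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "%s", PQerrorMessage(conn));
        PQclear(res);
        return (t1.tv_sec - t0.tv_sec) * 1000.0 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    }

    int
    main(void)
    {
        PGconn *s1 = PQconnectdb("");   /* session 1: runs the timed query */
        PGconn *s2 = PQconnectdb("");   /* session 2: holds the open INSERT */

        printf("before: %.3f ms\n", timed_query(s1));

        PQclear(PQexec(s2, "BEGIN"));
        PQclear(PQexec(s2, "INSERT INTO pgbench_accounts "
                           "SELECT aid + 1000000, 1, 0, '' FROM pgbench_accounts"));
        printf("during: %.3f ms\n", timed_query(s1));   /* inserted but not committed */

        PQclear(PQexec(s2, "ROLLBACK"));
        printf("after : %.3f ms\n", timed_query(s1));   /* after the rollback */

        PQfinish(s1);
        PQfinish(s2);
        return 0;
    }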

The timings above are with both xid_in_snapshot_cache.v1.patch and
cache_TransactionIdInProgress.v2.patch applied, but the numbers are
not noticeably different with only the first patch applied. After I
vacuum, the plan timings in both cases return to normal.

In a quick test with gdb (and also in perf report output), I didn't see
the following block in procarray.c being entered at all:

+   if (max_prepared_xacts == 0 && pgprocno >= 0 &&
+       (TransactionIdEquals(xid, pxid) || TransactionIdEquals(xid, cxid)))
+   {
…

I'll keep looking, but comments are welcome. I'm setting this back to
Needs Review in the meantime.

-- Abhijit




Re: [HACKERS] [PERFORM] In progress INSERT wrecks plans on table

2013-07-13 Thread Abhijit Menon-Sen
At 2013-07-13 14:19:23 +0530, a...@2ndquadrant.com wrote:

 The timings above are with both xid_in_snapshot_cache.v1.patch and
 cache_TransactionIdInProgress.v2.patch applied

For anyone who wants to try to reproduce the results, here's the patch I
used, which is both patches above plus some typo fixes in comments.

-- Abhijit
diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c
index c2f86ff..660fab0 100644
--- a/src/backend/storage/ipc/procarray.c
+++ b/src/backend/storage/ipc/procarray.c
@@ -840,6 +840,9 @@ TransactionIdIsInProgress(TransactionId xid)
 	TransactionId topxid;
 	int			i,
 j;
+	static	int	pgprocno = -1;	/* cached last pgprocno */
+	static TransactionId pxid;	/* cached last parent xid */
+	static TransactionId cxid;	/* cached last child xid */
 
 	/*
 	 * Don't bother checking a transaction older than RecentXmin; it could not
@@ -875,6 +878,60 @@ TransactionIdIsInProgress(TransactionId xid)
 	}
 
 	/*
+	 * Check to see if we have cached the pgprocno for the xid we seek.
+	 * We will have cached either pxid only or both pxid and cxid.
+	 * So we check to see whether pxid or cxid matches the xid we seek,
+	 * but then re-check just the parent pxid. If the PGXACT doesn't
+	 * match then the transaction must be complete because an xid is
+	 * only associated with one PGPROC/PGXACT during its lifetime
+	 * except when we are using prepared transactions.
+	 * Transaction wraparound is not a concern because we are checking
+	 * the status of an xid we see in data and that might be running;
+	 * anything very old will already be deleted or frozen. So stale
+	 * cached values for pxid and cxid don't affect the correctness
+	 * of the result.
+	 */
+	if (max_prepared_xacts == 0 && pgprocno >= 0 &&
+		(TransactionIdEquals(xid, pxid) || TransactionIdEquals(xid, cxid)))
+	{
+		volatile PGXACT *pgxact;
+
+		pgxact = allPgXact[pgprocno];
+
+		pxid = pgxact->xid;
+
+		/*
+		 * XXX Can we test without the lock first? If the values don't
+		 * match without the lock they never will match...
+		 */
+
+		/*
+		 * xid matches, so wait for the lock and re-check.
+		 */
+		LWLockAcquire(ProcArrayLock, LW_SHARED);
+
+		pxid = pgxact->xid;
+
+		if (TransactionIdIsValid(pxid) && TransactionIdEquals(pxid, xid))
+		{
+			LWLockRelease(ProcArrayLock);
+			return true;
+		}
+
+		LWLockRelease(ProcArrayLock);
+
+		pxid = cxid = InvalidTransactionId;
+		return false;
+	}
+
+	/*
+	 * Our cache didn't match, so zero the cxid so that when we reset pxid
+	 * we don't become confused that the cxid and pxid still relate.
+	 * cxid will be reset to something useful later, if appropriate.
+	 */
+	cxid = InvalidTransactionId;
+
+	/*
 	 * If first time through, get workspace to remember main XIDs in. We
 	 * malloc it permanently to avoid repeated palloc/pfree overhead.
 	 */
@@ -910,10 +967,12 @@ TransactionIdIsInProgress(TransactionId xid)
 	/* No shortcuts, gotta grovel through the array */
 	for (i = 0; i < arrayP->numProcs; i++)
 	{
-		int			pgprocno = arrayP->pgprocnos[i];
-		volatile PGPROC *proc = allProcs[pgprocno];
-		volatile PGXACT *pgxact = allPgXact[pgprocno];
-		TransactionId pxid;
+		volatile PGPROC *proc;
+		volatile PGXACT *pgxact;
+
+		pgprocno = arrayP->pgprocnos[i];
+		proc = allProcs[pgprocno];
+		pgxact = allPgXact[pgprocno];
 
 		/* Ignore my own proc --- dealt with it above */
 		if (proc == MyProc)
@@ -948,7 +1007,7 @@ TransactionIdIsInProgress(TransactionId xid)
 		for (j = pgxact->nxids - 1; j >= 0; j--)
 		{
 			/* Fetch xid just once - see GetNewTransactionId */
-			TransactionId cxid = proc->subxids.xids[j];
+			cxid = proc->subxids.xids[j];
 
 			if (TransactionIdEquals(cxid, xid))
 			{
@@ -975,6 +1034,14 @@ TransactionIdIsInProgress(TransactionId xid)
 	 */
 	if (RecoveryInProgress())
 	{
+		/*
+		 * Hot Standby doesn't use pgprocno, so we can clear the cache.
+		 *
+		 * XXX we could try to remember the offset into the KnownAssignedXids
+		 * array, which is a possibility for another day.
+		 */
+		pxid = cxid = InvalidTransactionId;
+
 		/* none of the PGXACT entries should have XIDs in hot standby mode */
 		Assert(nxids == 0);
 
@@ -999,6 +1066,12 @@ TransactionIdIsInProgress(TransactionId xid)
 	LWLockRelease(ProcArrayLock);
 
 	/*
+	 * After this point we don't remember pgprocno, so the cache is
+	 * no use to us anymore.
+	 */
+	pxid = cxid = InvalidTransactionId;
+
+	/*
 	 * If none of the relevant caches overflowed, we know the Xid is not
 	 * running without even looking at pg_subtrans.
 	 */
@@ -1509,6 +1582,9 @@ GetSnapshotData(Snapshot snapshot)
 
 	snapshot->curcid = GetCurrentCommandId(false);
 
+	/* Initialise the single xid cache for this snapshot */
+	snapshot->xid_in_snapshot = InvalidTransactionId;
+
 	/*
 	 * This is a new snapshot, so set both refcounts are zero, and mark it as
 	 * not copied in persistent memory.
diff --git a/src/backend/utils/time/tqual.c b/src/backend/utils/time/tqual.c
index 55563ea..dc4bf54 100644
--- a/src/backend/utils/time/tqual.c

Re: [HACKERS] Review: extension template

2013-07-13 Thread Dimitri Fontaine
Hi,

Please find attached a new version (v10) of the patch that fixes the
reported dependency problems and adds some new regression tests to
cover them.

The patch implements the solution we discussed privately with Markus
while at the CHAR(13) conference:

  - create template for extension is now possible even if an extension
is already installed, so that you can install a template for a new
version of the extension;

  - all the scripts used to install an extension are now set as
dependencies so that you can't drop parts of what you need at
restore time;

  - you can create extension for template x version 'y' when you already
had an upgrade path leading to that same version 'y', but only if
your set of parameters for the version 'y' remains the same as
what's already installed in the auxiliary control entry;

  - fix a pg_dump bug by using special dollar quoting $extname$.

Markus Wanner mar...@bluegap.ch writes:
 db1=# CREATE TEMPLATE FOR EXTENSION foo VERSION '0.1' AS $foo$ SELECT 2; $foo$;
 ERROR:  extension foo already exists

Fixed in the attached.

 db1=# DROP TEMPLATE FOR EXTENSION foo FROM '0.0' TO '0.1';
 DROP TEMPLATE FOR EXTENSION

 In this state, extension foo as of version '0.1' is installed, but
 running this through dump & restore, you'll only get back '0.0'.

Fixed in the attached.

 This certainly creates a bad state that leads to an error, when run
 through dump & restore.

Fixed in the attached.

 db1=# DROP TEMPLATE FOR EXTENSION foo VERSION '0.0';
 DROP TEMPLATE FOR EXTENSION

 ... should already err out here ...

Fixed in the attached.

 Another thing that surprised me is the inability to have an upgrade
 script *and* a full version (for the same extension target version).
 Even if that's intended behavior, the error message could be improved:

Fixed in the attached by allowing both to co-exist.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



templates.v10.patch.gz
Description: Binary data



[HACKERS] Re: Patch to add support of IF NOT EXISTS to others CREATE statements

2013-07-13 Thread Abhijit Menon-Sen
At 2013-06-12 14:29:59 -0300, fabriziome...@gmail.com wrote:

 The attached patch adds support for IF NOT EXISTS to the CREATE
 statements listed below: […]

I noticed that this patch was listed as Needs Review with no
reviewers, so I had a quick look.

It applies with a couple of trailing whitespace warnings. Compiles OK.
Documentation changes look fine other than a couple of minor whitespace
errors (literal tabs in some continuation lines).

First, I looked at the changes in src/include: adding if_not_exists to
the relevant parse nodes, and adding ifNotExists parameters to various
functions (e.g. OperatorCreate). The changes look fine. There's a bit
more whitespace quirkiness, though (removed tabs).

Next, changes in src/backend, starting with parser changes: the patch
adds IF_P NOT EXISTS variants for various productions. For example:

src/backend/parser/gram.y:4605:

 DefineStmt:
     CREATE AGGREGATE func_name aggr_args definition
         {
             DefineStmt *n = makeNode(DefineStmt);
             n->kind = OBJECT_AGGREGATE;
             n->oldstyle = false;
             n->defnames = $3;
             n->args = $4;
             n->definition = $5;
             n->if_not_exists = false;
             $$ = (Node *)n;
         }
     | CREATE AGGREGATE IF_P NOT EXISTS func_name aggr_args definition
         {
             DefineStmt *n = makeNode(DefineStmt);
             n->kind = OBJECT_AGGREGATE;
             n->oldstyle = false;
             n->defnames = $6;
             n->args = $7;
             n->definition = $8;
             n->if_not_exists = true;
             $$ = (Node *)n;
         }

Although there is plenty of precedent for this kind of doubling of rules
(e.g. CREATE SCHEMA, CREATE EXTENSION), it doesn't strike me as the best
idea. There's an opt_if_not_exists, and this patch uses it for CREATE
CAST (and it was already used for AlterEnumStmt).

I think opt_if_not_exists should be used for the others as well.
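
To make this concrete, here is a rough, untested sketch of what the
aggregate production could look like using opt_if_not_exists (the $n
positions shift because the optional clause takes a slot, and I haven't
checked whether merging the two rules introduces grammar conflicts):

 DefineStmt:
     CREATE AGGREGATE opt_if_not_exists func_name aggr_args definition
         {
             DefineStmt *n = makeNode(DefineStmt);
             n->kind = OBJECT_AGGREGATE;
             n->oldstyle = false;
             n->if_not_exists = $3;
             n->defnames = $4;
             n->args = $5;
             n->definition = $6;
             $$ = (Node *)n;
         }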

Moving on, the patch adds the if_not_exists field to the relevant
functions in {copyfuncs,equalfuncs}.c and adds stmt->if_not_exists
to the calls to DefineAggregate() etc. in tcop/utility.c. Fine.

 diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
 index 64ca312..851c314 100644
 --- a/src/backend/catalog/heap.c
 +++ b/src/backend/catalog/heap.c
  /* 
 @@ -1219,7 +1220,8 @@ heap_create_with_catalog(const char *relname,
  -1,	/* typmod */
  0,	/* array dimensions for typBaseType */
  false,	/* Type NOT NULL */
 -InvalidOid); /* rowtypes never have a collation */
 +InvalidOid,  /* rowtypes never have a collation */
 +false);

Parameter needs a comment.

 diff --git a/src/backend/catalog/pg_aggregate.c 
 b/src/backend/catalog/pg_aggregate.c
 index e80b600..4452ba3 100644
 --- a/src/backend/catalog/pg_aggregate.c
 +++ b/src/backend/catalog/pg_aggregate.c
 @@ -228,7 +229,7 @@ AggregateCreate(const char *aggName,
  
  	procOid = ProcedureCreate(aggName,
  							  aggNamespace,
 -							  false,	/* no replacement */
 +							  false,	/* replacement */
  							  false,	/* doesn't return a set */
  							  finaltype,	/* returnType */
  							  GetUserId(),	/* proowner */

What's up with this? We're calling ProcedureCreate with replace==false,
so no replacement sounds correct to me.

 diff --git a/src/backend/catalog/pg_operator.c 
 b/src/backend/catalog/pg_operator.c
 index 3c4fedb..7466e76 100644
 --- a/src/backend/catalog/pg_operator.c
 +++ b/src/backend/catalog/pg_operator.c
 @@ -336,7 +336,8 @@ OperatorCreate(const char *operatorName,
  Oid restrictionId,
  Oid joinId,
  bool canMerge,
 -bool canHash)
 +bool canHash,
 +bool if_not_exists)
  {
  Relation	pg_operator_desc;
   HeapTuple   tup;

This should be ifNotExists (to match the header file, and all your
other changes).

 @@ -416,11 +417,18 @@ OperatorCreate(const char *operatorName,
  rightTypeId,
  
 &operatorAlreadyDefined);
  
 - if (operatorAlreadyDefined)
 + 

Re: [Review] Re: [HACKERS] minor patch submission: CREATE CAST ... AS EXPLICIT

2013-07-13 Thread Cédric Villemain
  Generally speaking, I agree with Robert's objection.  The patch in
  itself adds only one unnecessary keyword, which probably wouldn't be
  noticeable, but the argument for it implies that we should be willing
  to add a lot more equally-unnecessary keywords, which I'm not.  gram.o
  is already about 10% of the entire postgres executable, which probably
  goes far towards explaining why its inner loop always shows up high in
  profiling: cache misses are routine.  And the size of those tables is
  at least linear in the number of keywords --- perhaps worse than linear,
  I'm not sure.  Adding a bunch of keywords *will* cost us in performance.
  I'm not willing to pay that cost for something that adds neither
  features nor spec compliance.
 
 (1) Here are objective measures of the postgres stripped binary size
 compiled from scratch on my laptop this morning:
 
amd64, gcc version 4.7.3 (Debian 4.7.3-4) 
then gcc version 4.8.1 (Debian 4.8.1-6) 

with no option to configure, I got:

   - without AS EXPLICIT: 5286408 bytes
gram.o:  904936 bytes
keywords.o:   20392 bytes

keywords.o :  20376 bytes 
gram.o:   909240 bytes

keywords.o : 20376 bytes 
gram.o:   900504 bytes

 
   - with AS EXPLICIT   : 5282312 bytes
gram.o:  901632 bytes
keywords.o:   20440 bytes

keywords.o : 20424 bytes 
gram.o:  905904 bytes

keywords.o : 20424 bytes 
gram.o:  897168 bytes

 The executable binary is reduced by 4 KB with AS EXPLICIT, which
 seems to come from some ld fluke really, because the only difference
 I've found is the 8 bytes added to keywords.o. This must be very
 specific to the version and options used with gcc & ld on my laptop.

same here, amd64. gcc has more impact; I didn't try with clang.

 As for the general issue with the parser size: I work with languages and
 compilers as a researcher. We had issues at one point with a scanner
 because of too many keywords, and we solved it by removing keywords from
 the scanner and checking them semantically in the parser with a hash
 table, basically as suggested by Andres. The SQL syntax is pretty
 redundant so that there are few choices at each point. Some tools can
 generate perfect hash functions that can help (e.g. gperf). However I'm
 not sure of the actual impact in size and time by following this path, I'm
 just sure that there would be fewer keywords. IMHO, this issue is
 orthogonal & independent from this patch.
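
For illustration only, the check-keywords-in-the-parser idea could look
roughly like the sketch below (generic C, not a proposal against the
actual scanner or kwlookup code; a generated perfect hash, e.g. from
gperf, would replace the hand-rolled table and probing):

    #include <stddef.h>
    #include <string.h>

    typedef struct { const char *word; int token; } Keyword;

    /* tiny fixed list; real code would generate this */
    static const Keyword keywords[] = {
        {"cast", 257}, {"create", 258}, {"explicit", 259}, {"select", 260},
    };
    #define NKEYWORDS (sizeof(keywords) / sizeof(keywords[0]))
    #define TABSIZE 16              /* power of two, larger than NKEYWORDS */

    static const Keyword *table[TABSIZE];

    static unsigned
    kw_hash(const char *s)
    {
        unsigned h = 5381;          /* djb2 */

        while (*s)
            h = h * 33 + (unsigned char) *s++;
        return h;
    }

    /* build the table once, resolving collisions by linear probing */
    static void
    kw_init(void)
    {
        size_t i;

        for (i = 0; i < NKEYWORDS; i++)
        {
            unsigned slot = kw_hash(keywords[i].word) & (TABSIZE - 1);

            while (table[slot] != NULL)
                slot = (slot + 1) & (TABSIZE - 1);
            table[slot] = &keywords[i];
        }
    }

    /* return the keyword's token, or -1 if ident is an ordinary identifier */
    static int
    kw_lookup(const char *ident)
    {
        unsigned slot = kw_hash(ident) & (TABSIZE - 1);

        while (table[slot] != NULL)
        {
            if (strcmp(table[slot]->word, ident) == 0)
                return table[slot]->token;
            slot = (slot + 1) & (TABSIZE - 1);
        }
        return -1;
    }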
 
 Finally, to answer your question directly, I'm really a nobody here, so
 you can do whatever pleases you with the patch.

I have no strong objection to the patch. It is only decoration and should not 
hurt.
-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation




Re: [HACKERS] proposal: simple date constructor from numeric values

2013-07-13 Thread Pavel Stehule
Hello

2013/7/12 Peter Eisentraut pete...@gmx.net:
 There is a small inconsistency:

 select time '12:30:57.123456789';

 gives

 12:30:57.123457

 but

 select make_time(12, 30, 57.123456789);

 gives

 12:30:57.123456

fixed - see attached patch

Regards

Pavel




make_date-v3.patch
Description: Binary data



[HACKERS] ecpg prototype mismatch

2013-07-13 Thread Peter Eisentraut
ECPG's pgtypeslib contains two slightly different prototypes for
PGTYPEStimestamp_defmt_scan(), neither of which is in a header file.  I
propose something like the attached patch, although I'm not sure which
header file is the most appropriate one.
diff --git a/src/interfaces/ecpg/pgtypeslib/dt.h b/src/interfaces/ecpg/pgtypeslib/dt.h
index dfe6f9e..d7a1935 100644
--- a/src/interfaces/ecpg/pgtypeslib/dt.h
+++ b/src/interfaces/ecpg/pgtypeslib/dt.h
@@ -348,6 +348,10 @@ void		GetCurrentDateTime(struct tm *);
 int			date2j(int, int, int);
 void		TrimTrailingZeros(char *);
 void		dt2time(double, int *, int *, int *, fsec_t *);
+int			PGTYPEStimestamp_defmt_scan(char **str, char *fmt, timestamp * d,
+			int *year, int *month, int *day,
+			int *hour, int *minute, int *second,
+			int *tz);
 
 extern char *pgtypes_date_weekdays_short[];
 extern char *pgtypes_date_months[];
diff --git a/src/interfaces/ecpg/pgtypeslib/dt_common.c b/src/interfaces/ecpg/pgtypeslib/dt_common.c
index 6b89e4a..112538e 100644
--- a/src/interfaces/ecpg/pgtypeslib/dt_common.c
+++ b/src/interfaces/ecpg/pgtypeslib/dt_common.c
@@ -2595,9 +2595,6 @@
 }
 
 /* XXX range checking */
-int PGTYPEStimestamp_defmt_scan(char **, char *, timestamp *, int *, int *, int *,
-			int *, int *, int *, int *);
-
 int
 PGTYPEStimestamp_defmt_scan(char **str, char *fmt, timestamp * d,
 			int *year, int *month, int *day,
diff --git a/src/interfaces/ecpg/pgtypeslib/timestamp.c b/src/interfaces/ecpg/pgtypeslib/timestamp.c
index 79539c7..d19fc58 100644
--- a/src/interfaces/ecpg/pgtypeslib/timestamp.c
+++ b/src/interfaces/ecpg/pgtypeslib/timestamp.c
@@ -18,9 +18,6 @@
 #include "pgtypes_date.h"
 
 
-int PGTYPEStimestamp_defmt_scan(char **, const char *, timestamp *, int *, int *, int *,
-			int *, int *, int *, int *);
-
 #ifdef HAVE_INT64_TIMESTAMP
 static int64
 time2t(const int hour, const int min, const int sec, const fsec_t fsec)



Re: [HACKERS] [PATCH] pgbench --throttle (submission 7 - with lag measurement)

2013-07-13 Thread Greg Smith

On 6/30/13 2:04 AM, Fabien COELHO wrote:

My guess is the OS. PQfinish or select are system calls that
present opportunities to switch context. I think that the OS is passing
time with other processes on the same host, especially postgres
backends, when it is not with the client.


I went looking for other instances of this issue in pgbench results; 
that's where I got lost over the last two weeks.  It's subtle because the 
clients normally all end in one very short burst of time, but I have 
found evidence of PQfinish issues elsewhere.  Evidence still seems to 
match the theory that throttling highlights this only because it spreads 
out the ending a bit more.  Also, it happens to be a lot worse on the 
Mac I did initial testing with, and I don't have nearly as many Mac 
pgbench results.


There's a refactoring possible here that seems to make this whole class 
of problem go away.  If I change pgbench so that PQfinish isn't called 
for any client until *all* of the clients are actually finished with 
transactions, the whole issue goes away.  I'm going to package that hack 
the right way into its own little change, revisit the throttling code, 
and then this all should wrap up nicely.  I'd like to get this one out 
of the commitfest so I can move on to looking at something else.
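
To sketch the idea (this is not the actual pgbench change, and the names
below are made up): rather than each client calling PQfinish() as soon as
its own transactions are done, just mark it finished and close every
connection in one pass after the whole run is over:

    #include <stdbool.h>
    #include <libpq-fe.h>

    typedef struct
    {
        PGconn *con;
        bool    done;           /* this client has finished its transactions */
    } Client;

    /* called once, after every client has reported done */
    static void
    disconnect_all(Client *clients, int nclients)
    {
        int     i;

        for (i = 0; i < nclients; i++)
        {
            if (clients[i].con != NULL)
            {
                PQfinish(clients[i].con);   /* the expensive part now happens
                                             * after all measured work is over */
                clients[i].con = NULL;
            }
        }
    }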


--
Greg Smith   2ndQuadrant US    g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com




Re: [HACKERS] [PATCH] pgbench --throttle (submission 7 - with lag measurement)

2013-07-13 Thread Fabien COELHO


Hello Greg,

There's a refactoring possible here that seems to make this whole class of 
problem go away.  If I change pgbench so that PQfinish isn't called for any 
client until *all* of the clients are actually finished with transactions, 
the whole issue goes away.


Sure. If the explanation is that because of the load the OS is just 
switching to a postgres process while PQfinish is being called, then 
waiting for the end of other transactions means that postgres processes 
don't have anything to do anymore, so process switching is much less 
likely, so nothing would show up... However this is really hiding the 
underlying fact from the measures rather than solving anything, IMHO.



I'm going to package that hack the right way into its own little change,
revisit the throttling code, and then this all should wrap up nicely.


My 0.02€: if it means adding complexity to the pgbench code, I think that 
it is not worth it. The point of pgbench is to look at a steady state, not 
to end in the most graceful possible way as far as measures are concerned. 
On the other hand, if it does not add too much complexity, why not!


I'd like to get this one out of the commitfest so I can move onto 
looking at something else.


Thanks.

--
Fabien.


Re: [HACKERS] Regex pattern with shorter back reference does NOT work as expected

2013-07-13 Thread Tom Lane
I wrote:
 Jeevan Chalke jeevan.cha...@enterprisedb.com writes:
 Following example does not work as expected:
 
 -- Should return TRUE but returning FALSE
 SELECT 'Programmer' ~ '(\w).*?\1' as t;

 This is clearly broken, but I'm uncomfortable with the proposed patch.
 As written, it changes behavior for both the shortest-match-preferred
 and longest-match-preferred cases; but you've offered no evidence that
 the longest-match case is broken.  Maybe it is --- it's sure not
 obvious why it's okay to abandon the search early in this case.  But I
 think we'd have been likely to hear about it before now if there were
 a matching failure in that path, since longest-preferred is so much
 more widely used.

After reflecting on this for awhile, I think that the longest-preferred
case is indeed also wrong in theory, but it's unreachable, which
explains the lack of bug reports.  To get to the "no point in trying
again" code, we have to have a success of the DFA match followed by a
failure of the cdissect match, which essentially means that the string
would match if we didn't have to constrain some backref to exactly match
the string matched by its referent.  Now, in the longest-preferred case,
the supposed early exit test is "end == begin", i.e. the tentative match
was of zero length overall.  I can't envision any situation in which a
backref constraint could fail given that, because both the referent
pattern piece and the backref piece would have to be matching
zero-length substrings as well.  There could be anchor constraints,
lookahead constraints, and so forth in play, but those would all have
been checked by the DFA, so they're not going to result in cdissect
failures.  Hence, the "end == begin" test will simply never succeed.

On the other hand, the check made in the shortest-preferred case is
going to succeed if the tentative match was of maximal not minimal
length, and that's certainly a possible case.

So I think your patch is right, although I'd be inclined to refactor
the code to have just one test on shorter, like this:

/* go around and try again, if possible */
if (shorter)
{
if (end == estop)
break;
estart = end + 1;
}
else
{
if (end == begin)
break;
estop = end - 1;
}

so as to make it clearer that we're just defending against selecting an
impossible new estart or estop location.  (Although I argued above that
the "end == begin" test can't succeed, I wouldn't care to remove it
entirely here.)

regards, tom lane




Re: [HACKERS] GSOC13 proposal - extend RETURNING syntax

2013-07-13 Thread David Fetter
On Sat, Jul 13, 2013 at 12:49:45AM +0200, Karol Trzcionka wrote:
 Next version:
 - cleanup
 - regression test
 - fix issue reported by johto (invalid values in parallel transactions)
 I would like more feedback and comments about the patch, as some parts
 may be too hacky.
 In particular, is it a problem that I update a pointer to planSlot? In
 my patch, it points to tuple after all updates done between planner and
 executor (in fact it is not planSlot right now). I don't know whether
 the tuple could be deleted in the intervening time and if the pointer
 doesn't point to unreserved memory (I mean - memory which may be
 overwritten by something meanwhile).

Thanks for the updated patch!

Anybody care to look this over for vulnerabilities as described above?

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] checking variadic any argument in parser - should be array

2013-07-13 Thread Andrew Dunstan


On 06/29/2013 03:29 PM, Pavel Stehule wrote:




5. This patch has user visibility, i.e. now we are throwing an error when
the user only says VARIADIC NULL like:

 select concat(variadic NULL) is NULL;

Previously it was working but now we are throwing an error. Well, we are
now more strict than earlier with using VARIADIC + ANY, so I have no issue
as such. But I guess we need to document this user visibility change. I
don't know exactly where though. I searched for VARIADIC and all related
documentation says it needs an array, so nothing harmful as such, so you
can ignore this review comment but I thought it worth mentioning it.

yes, it is a point for possible issues in RELEASE NOTES, I am thinking ???

Well, the writer of the release notes should be aware of this. And I hope
he will be. So no issue.



Is the behaviour change really unavoidable? Is it really what we want? 
Nobody seems to have picked up on this except the author and the 
reviewer. I'd hate us to do this and then surprise people. I'm not sure 
how many people are using VARIADIC "any", but I have started doing so 
and expect to do so more, and I suspect I'm not alone.


cheers

andrew





Re: [HACKERS] checking variadic any argument in parser - should be array

2013-07-13 Thread Pavel Stehule
Hello

2013/7/14 Andrew Dunstan and...@dunslane.net:

 On 06/29/2013 03:29 PM, Pavel Stehule wrote:

 5. This patch has user visibility, i.e. now we are throwing an error when
 the user only says VARIADIC NULL like:

  select concat(variadic NULL) is NULL;

 Previously it was working but now we are throwing an error. Well, we are
 now more strict than earlier with using VARIADIC + ANY, so I have no
 issue as such. But I guess we need to document this user visibility
 change. I don't know exactly where though. I searched for VARIADIC and
 all related documentation says it needs an array, so nothing harmful as
 such, so you can ignore this review comment but I thought it worth
 mentioning it.

 yes, it is a point for possible issues in RELEASE NOTES, I am thinking
 ???

 Well, the writer of the release notes should be aware of this. And I
 hope he will be. So no issue.

 Is the behaviour change really unavoidable? Is it really what we want?
 Nobody seems to have picked up on this except the author and the
 reviewer. I'd hate us to do this and then surprise people. I'm not sure
 how many people are using VARIADIC "any", but I have started doing so
 and expect to do so more, and I suspect I'm not alone.

It doesn't disallow NULL - it disallows non-array types in this
position, because only array-typed values can appear there; that keeps
the possible usages unambiguous.

so SELECT varfx(VARIADIC NULL) -- is disallowed
but SELECT varfx(VARIADIC NULL::text[]) -- is allowed

For example, I can write SELECT varfx(10,20,30), but I cannot write
SELECT varfx(VARIADIC 10,20,30), because that behaviour would be
undefined.

Can you send me your use case where this check is unwanted, please?

The execution of a variadic function can be a little bit faster, because
this check is moved from the execution stage to the parser stage (and
that is the reason why I cannot check for NULL: in the parser I have no
simple access to information about whether a parameter is constant or
not).

It is meant to fix the unwanted behaviour where the VARIADIC keyword was
ignored - I am sure this is something we want, but I don't know your
arguments against it, so please send me your use case.

Regards

Pavel




 cheers

 andrew


