Re: [HACKERS] [PATCH] Add two-arg form of current_setting(NAME, FALLBACK)

2017-11-01 Thread David G. Johnston
On Thu, Mar 19, 2015 at 3:41 PM, David Christensen 
wrote:

> The two-arg form of the current_setting() function will allow a
> fallback value to be returned instead of throwing an error when an
> unknown GUC is provided.  This would come in most useful when using
> custom GUCs; e.g.:
>
>   -- returns current setting of foo.bar, or 'default' if not set
>   SELECT current_setting('foo.bar', 'default')
>

This doesn't actually change the GUC in the system, right?  Do we want a
side-effect version of this?

There is already a two-arg form where the second argument is a boolean -
there need to be tests that ensure proper behavior and function lookup
resolution.
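
For illustration, the two two-arg forms side by side (a sketch only,
assuming a custom GUC foo.bar that has not been set):

  -- existing form: boolean missing_ok, returns NULL instead of erroring
  SELECT current_setting('foo.bar', true);

  -- proposed form: text fallback from this patch
  SELECT current_setting('foo.bar', 'default');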

No docs.

David J.


Re: [HACKERS] ALTER COLUMN TYPE vs. domain constraints

2017-11-01 Thread Tom Lane
Michael Paquier <michael.paqu...@gmail.com> writes:
> On Fri, Oct 27, 2017 at 11:15 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> We could consider back-patching the attached to cover this, but
>> I'm not entirely sure it's worth the trouble, because I haven't
>> thought of any non-silly use-cases in the absence of domains
>> over composite.  Comments?

> There are no real complaints about the current behavior, are there?
> So only patching HEAD seems enough to me.

Yeah, we can leave it till someone does complain.

> You have added a comment on the constraint to make sure that it
> remains in place across this ALTER TYPE.  Querying pg_obj_description
> would make sure that the comment on the constraint is still there.

Done.
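
For illustration, the kind of check being added might look like this
(a sketch only, with hypothetical names, and assuming the domains over
composite types discussed above):

  CREATE TYPE comptype AS (r float8, i float8);
  CREATE DOMAIN dcomptype AS comptype;
  ALTER DOMAIN dcomptype ADD CONSTRAINT c1 CHECK ((VALUE).r > 0);
  COMMENT ON CONSTRAINT c1 ON DOMAIN dcomptype IS 'comment on c1';
  ALTER TYPE comptype ALTER ATTRIBUTE r TYPE numeric;
  -- the comment should still be there after the constraint is rebuilt:
  SELECT obj_description(oid, 'pg_constraint')
    FROM pg_constraint WHERE conname = 'c1';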

> +RebuildDomainConstraintComment(AlteredTableInfo *tab, int pass, Oid objid,
> +  List *domname, char *conname)
> There is much duplication with RebuildConstraintComment. Why not
> grouping both under say RebuildObjectComment()?

True.  I'd originally expected the code to differ more, but we can
merge these easily enough.

> I would think about
> having cmd->objtype and cmd->object passed as arguments, and then
> remove rel and domname from the existing arguments.

Doing it like that requires the callers to do work (to prepare the object
ID data structure) that's wasted if there's no comment, which most often
there wouldn't be, I suspect.  Seems better to just pass the info the
caller does have and let this function do the rest.

Pushed, thanks for reviewing!

        regards, tom lane




Re: [HACKERS] [PATCH] Add two-arg form of current_setting(NAME, FALLBACK)

2017-11-01 Thread Pavel Stehule
 _null_ _null_ set_config_by_name
> _null_ _null_ _null_ ));
>  DESCR("SET X as a function");
>  DATA(insert OID = 2084 (  pg_show_all_settings PGNSP PGUID 12 1 1000 0 0
> f f f f t t s 0 0 2249 "" 
> "{25,25,25,25,25,25,25,25,25,25,25,1009,25,25,25,23}"
> "{o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}" "{name,setting,unit,category,
> short_desc,extra_desc,context,vartype,source,min_val,max_
> val,enumvals,boot_val,reset_val,sourcefile,sourceline}" _null_
> show_all_settings _null_ _null_ _null_ ));
> diff --git a/src/include/utils/builtins.h b/src/include/utils/builtins.h
> index bc4517d..e3d9fbb 100644
> --- a/src/include/utils/builtins.h
> +++ b/src/include/utils/builtins.h
> @@ -1088,6 +1088,7 @@ extern Datum quote_nullable(PG_FUNCTION_ARGS);
>
>  /* guc.c */
>  extern Datum show_config_by_name(PG_FUNCTION_ARGS);
> +extern Datum show_config_by_name_fallback(PG_FUNCTION_ARGS);
>  extern Datum set_config_by_name(PG_FUNCTION_ARGS);
>  extern Datum show_all_settings(PG_FUNCTION_ARGS);
>
> diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
> index d3100d1..145ad5d 100644
> --- a/src/include/utils/guc.h
> +++ b/src/include/utils/guc.h
> @@ -352,6 +352,7 @@ extern int set_config_option(const char *name, const
> char *value,
>   bool is_reload);
>  extern void AlterSystemSetConfigFile(AlterSystemStmt *setstmt);
>  extern char *GetConfigOptionByName(const char *name, const char
> **varname);
> +extern char *GetConfigOptionByNameFallback(const char *name, const char
> *fallback, const char **varname);
>  extern void GetConfigOptionByNum(int varnum, const char **values, bool
> *noshow);
>  extern int GetNumConfigOptions(void);
>
> diff --git a/src/test/regress/expected/guc.out
> b/src/test/regress/expected/guc.out
> index 4f0065c..905969b 100644
> --- a/src/test/regress/expected/guc.out
> +++ b/src/test/regress/expected/guc.out
> @@ -736,3 +736,22 @@ NOTICE:  text search configuration "no_such_config"
> does not exist
>  select func_with_bad_set();
>  ERROR:  invalid value for parameter "default_text_search_config":
> "no_such_config"
>  reset check_function_bodies;
> +-- check multi-arg custom current_setting
> +-- this should error:
> +select current_setting('nosuch.setting');
> +ERROR:  unrecognized configuration parameter "nosuch.setting"
> +-- this should return 'fallback'
> +select current_setting('nosuch.setting','fallback');
> + current_setting
> +-
> + fallback
> +(1 row)
> +
> +-- this should return 'nada', not 'fallback'
> +set nosuch.setting = 'nada';
> +select current_setting('nosuch.setting','fallback');
> + current_setting
> +-
> + nada
> +(1 row)
> +
> diff --git a/src/test/regress/sql/guc.sql b/src/test/regress/sql/guc.sql
> index 3de8a6b..48c0bed 100644
> --- a/src/test/regress/sql/guc.sql
> +++ b/src/test/regress/sql/guc.sql
> @@ -275,3 +275,15 @@ set default_text_search_config = no_such_config;
>  select func_with_bad_set();
>
>  reset check_function_bodies;
> +
> +-- check multi-arg custom current_setting
> +
> +-- this should error:
> +select current_setting('nosuch.setting');
> +
> +-- this should return 'fallback'
> +select current_setting('nosuch.setting','fallback');
> +
> +-- this should return 'nada', not 'fallback'
> +set nosuch.setting = 'nada';
> +select current_setting('nosuch.setting','fallback');
> --
> 2.3.3
>
>
>
>


[HACKERS] [PATCH] Add two-arg form of current_setting(NAME, FALLBACK)

2017-11-01 Thread David Christensen
diff --git a/src/include/utils/builtins.h b/src/include/utils/builtins.h
index bc4517d..e3d9fbb 100644
--- a/src/include/utils/builtins.h
+++ b/src/include/utils/builtins.h
@@ -1088,6 +1088,7 @@ extern Datum quote_nullable(PG_FUNCTION_ARGS);
 
 /* guc.c */
 extern Datum show_config_by_name(PG_FUNCTION_ARGS);
+extern Datum show_config_by_name_fallback(PG_FUNCTION_ARGS);
 extern Datum set_config_by_name(PG_FUNCTION_ARGS);
 extern Datum show_all_settings(PG_FUNCTION_ARGS);
 
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index d3100d1..145ad5d 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -352,6 +352,7 @@ extern int set_config_option(const char *name, const char 
*value,
  bool is_reload);
 extern void AlterSystemSetConfigFile(AlterSystemStmt *setstmt);
 extern char *GetConfigOptionByName(const char *name, const char **varname);
+extern char *GetConfigOptionByNameFallback(const char *name, const char 
*fallback, const char **varname);
 extern void GetConfigOptionByNum(int varnum, const char **values, bool 
*noshow);
 extern int GetNumConfigOptions(void);
 
diff --git a/src/test/regress/expected/guc.out 
b/src/test/regress/expected/guc.out
index 4f0065c..905969b 100644
--- a/src/test/regress/expected/guc.out
+++ b/src/test/regress/expected/guc.out
@@ -736,3 +736,22 @@ NOTICE:  text search configuration "no_such_config" does 
not exist
 select func_with_bad_set();
 ERROR:  invalid value for parameter "default_text_search_config": 
"no_such_config"
 reset check_function_bodies;
+-- check multi-arg custom current_setting
+-- this should error:
+select current_setting('nosuch.setting');
+ERROR:  unrecognized configuration parameter "nosuch.setting"
+-- this should return 'fallback'
+select current_setting('nosuch.setting','fallback');
+ current_setting 
+-
+ fallback
+(1 row)
+
+-- this should return 'nada', not 'fallback'
+set nosuch.setting = 'nada';
+select current_setting('nosuch.setting','fallback');
+ current_setting 
+-
+ nada
+(1 row)
+
diff --git a/src/test/regress/sql/guc.sql b/src/test/regress/sql/guc.sql
index 3de8a6b..48c0bed 100644
--- a/src/test/regress/sql/guc.sql
+++ b/src/test/regress/sql/guc.sql
@@ -275,3 +275,15 @@ set default_text_search_config = no_such_config;
 select func_with_bad_set();
 
 reset check_function_bodies;
+
+-- check multi-arg custom current_setting
+
+-- this should error:
+select current_setting('nosuch.setting');
+
+-- this should return 'fallback'
+select current_setting('nosuch.setting','fallback');
+
+-- this should return 'nada', not 'fallback'
+set nosuch.setting = 'nada';
+select current_setting('nosuch.setting','fallback');
-- 
2.3.3





Re: [HACKERS] MERGE SQL Statement for PG11

2017-11-01 Thread Simon Riggs
On 31 October 2017 at 18:55, Peter Geoghegan <p...@bowt.ie> wrote:
> On Tue, Oct 31, 2017 at 2:25 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> If there are challenges ahead, it's reasonable to ask for test cases
>> for that now especially if you think you know what they already are.
>> Imagine we go forwards 2 months - if you dislike my patch when it
>> exists you will submit a test case showing the fault. Why not save us
>> all the trouble and describe that now? Test Driven Development.
>
> I already have, on several occasions now. But if you're absolutely
> insistent on my constructing the test case in terms of a real SQL
> statement, then that's what I'll do.
>
> Consider this MERGE statement, from your mock documentation:
>
> MERGE INTO wines w
> USING wine_stock_changes s
> ON s.winename = w.winename
> WHEN NOT MATCHED AND s.stock_delta > 0 THEN
>   INSERT VALUES(s.winename, s.stock_delta)
> WHEN MATCHED AND w.stock + s.stock_delta > 0 THEN
>   UPDATE SET stock = w.stock + s.stock_delta
> ELSE
>   DELETE;
>
> Suppose we remove the WHEN NOT MATCHED case, leaving us with:
>
> MERGE INTO wines w
> USING wine_stock_changes s
> ON s.winename = w.winename
> WHEN MATCHED AND w.stock + s.stock_delta > 0 THEN
>   UPDATE SET stock = w.stock + s.stock_delta
> ELSE
>   DELETE;
>
> We now have a MERGE that will not INSERT, but will continue to UPDATE
> and DELETE.

Agreed

> (It's implied that your syntax cannot do this at all,
> because you propose to use the ON CONFLICT infrastructure, but I think we
> have to imagine a world in which that restriction either never existed
> or has subsequently been lifted.)
>
> The problem here is: Iff the first statement uses ON CONFLICT
> infrastructure, doesn't the absence of WHEN NOT MATCHED imply
> different semantics for the remaining updates and deletes in the
> second version of the query?

Not according to the SQL Standard, no. I have no plans for such
differences to exist.

Spec says: If we hit HeapTupleSelfUpdated then we throw an ERROR.

> You've removed what seems like a neat
> adjunct to the MERGE, but it actually changes everything else too when
> using READ COMMITTED. Isn't that pretty surprising?

I think you're presuming things I haven't said and don't mean, so
we're both surprised.

> If you're not
> clear on what I mean, see my previous remarks on EPQ, live lock, and
> what a CONFLICT could be in READ COMMITTED mode. Concurrent activity
> at READ COMMITTED mode can be expected to significantly alter the
> outcome here.

And I still have questions about what exactly you mean, but at least
this post is going in the right direction and I'm encouraged. Thank
you,

I think we need some way of expressing the problems clearly.

> That is rather frustrating.

Guess so.

>> You've said its possible another way. Show that assertion is actually
>> true. We're all listening, me especially, for the technical details.
>
> My proposal, if you want to call it that, has the merit of actually
> being how MERGE works in every other system. Both Robert and Stephen
> seem to be almost sold on what I suggest, so I think that I've
> probably already explained my position quite well.

The only info I have is

"a general purpose solution is one that more or
less works like an UPDATE FROM, with an outer join, whose ModifyTable
node is capable of insert, update, or delete (and accepts quals for
MATCHED and NOT matched cases, etc). You could still get duplicate
violations due to concurrent activity in READ COMMITTED mode".

Surely the whole point of this is to avoid duplicate violations due to
concurrent activity?

I'm not seeing how either design sketch rigorously avoids live locks,
but those are fairly unlikely and easy to detect and abort.

Thank you for a constructive email, we are on the way to somewhere good.

I have more to add, but wanted to get back to you soonish.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Mapping MERGE onto CTEs (Re: MERGE SQL Statement for PG11)

2017-11-01 Thread Alvaro Herrera
Nico Williams wrote:

> As an aside, I'd like to be able to control which CTEs are view-like and
> which are table-like.  In SQLite3, for example, they are all view-like,
> and the optimizer will act accordingly, whereas in PG they are all
> table-like, and thus optimizer barriers.

There was a short and easy to grasp (OK, maybe not) discussion on the
topic of CTEs acting differently.  I think the consensus is that CTEs
that are read-only and use only immutable functions may be considered
for inlining.
https://www.postgresql.org/message-id/5351711493487...@web53g.yandex.ru
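
For illustration, the barrier behavior in question (big_table here is
hypothetical): today the outer qual is not pushed down into the CTE, so
the CTE is materialized in full first:

  WITH t AS (SELECT * FROM big_table)
  SELECT * FROM t WHERE id = 42;  -- big_table is scanned completely; the
                                  -- id = 42 qual is applied afterwards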

-- 
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] Mapping MERGE onto CTEs (Re: MERGE SQL Statement for PG11)

2017-11-01 Thread Nico Williams
Is it possible to map MERGE onto a query with CTEs that does the
various DMLs, with all but the last RETURNING?  Here's a sketch:

WITH matched_rows AS (
    SELECT <target list> FROM <target table> t WHERE <condition>
 ),
 updated_rows AS (
    UPDATE <target table> t
    SET ...
    WHERE ... AND t IN (SELECT j FROM matched_rows j)
    RETURNING t
 ),
 inserted_rows AS (
    INSERT INTO <target table> t
    SELECT ...
    WHERE ... AND t NOT IN (SELECT j FROM matched_rows j)
    RETURNING t
 )
DELETE FROM <target table> t
WHERE ...;
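
For a more concrete flavor of the same shape, here is a sketch using the
hypothetical wines/wine_stock_changes tables from the MERGE thread (it
shares the snapshot and optimizer-barrier caveats discussed below):

WITH matched_rows AS (
    SELECT w.winename
      FROM wines w
      JOIN wine_stock_changes s ON s.winename = w.winename
 ),
 updated_rows AS (
    UPDATE wines w
       SET stock = w.stock + s.stock_delta
      FROM wine_stock_changes s
     WHERE s.winename = w.winename AND w.stock + s.stock_delta > 0
 RETURNING w.winename
 ),
 inserted_rows AS (
    INSERT INTO wines
    SELECT s.winename, s.stock_delta
      FROM wine_stock_changes s
     WHERE s.winename NOT IN (SELECT winename FROM matched_rows)
 RETURNING wines.winename
 )
DELETE FROM wines w
 USING wine_stock_changes s
 WHERE s.winename = w.winename AND w.stock + s.stock_delta <= 0;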

Now, one issue is that in PG CTEs are basically like temp tables, and
also like optimizer barriers, so this construction is not online, and if
matched_rows is very large, that would be a problem.

As an aside, I'd like to be able to control which CTEs are view-like and
which are table-like.  In SQLite3, for example, they are all view-like,
and the optimizer will act accordingly, whereas in PG they are all
table-like, and thus optimizer barriers.

Nico
-- 




Re: [HACKERS] Re: PANIC: invalid index offnum: 186 when processing BRIN indexes in VACUUM

2017-11-01 Thread Alvaro Herrera
Alvaro Herrera wrote:
> Tomas Vondra wrote:
> 
> > FWIW I can reproduce this on 9.5, and I don't even need to run the
> > UPDATE part. That is, INSERT + VACUUM running concurrently is enough to
> > produce broken BRIN indexes :-(
> 
> Hmm, I'm pretty sure we stress-tested brin in pretty much the same way.
> But I see this misbehavior too.  Looking ...

Turns out that this is related to concurrent growth of the table while
the summarization process is scanning -- so new pages have appeared at
the end of the table after the end point has been determined.  It would
be a pain to determine the number of blocks for each range, so I'm looking
for a simple way to fix it without imposing too much overhead.

-- 
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Account for cost and selectivity of HAVING quals

2017-11-01 Thread Tom Lane
Ashutosh Bapat <ashutosh.ba...@enterprisedb.com> writes:
> On Wed, Nov 1, 2017 at 3:15 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> here's a patch to fix the planner so that eval costs and selectivity of
>> HAVING quals are factored into the appropriate plan node numbers.
>> ...
>> + /* Add cost of qual, if any --- but we ignore its selectivity */

> And maybe we should try to explain why we can ignore selectivity.
> Similarly for the changes in create_minmaxagg_path().

I'm sure you realize that's because the estimate is already just one
row ... but sure, we can spell that out.

    regards, tom lane




Re: [HACKERS] [PATCH] Document the order of changing certain settings when using hot-standby servers

2017-11-01 Thread Peter Eisentraut
On 9/1/17 13:00, Robert Haas wrote:
> Now you're proposing to add:
> 
> If you want to increase these values you
> should do so on all standby servers first, before applying the changes to
> the primary. If you instead want to decrease these values you should do so
> on the primary first, before applying the changes to all standby servers.
> 
> But that's just the obvious logical consequence of the existing statement.
> 
> If we're going to add this text, I'd move it one sentence earlier and
> stick "Therefore, " at the beginning.

Committed incorporating that suggestion.
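
For illustration, with a hypothetical parameter and values (such settings
also require a restart to take effect):

  -- increasing max_connections: change each standby first, restart it,
  -- and only then the primary
  ALTER SYSTEM SET max_connections = 200;

  -- decreasing it: change the primary first, restart it,
  -- and only then each standby
  ALTER SYSTEM SET max_connections = 100;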

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Walsender timeouts and large transactions

2017-11-01 Thread Petr Jelinek
path when last keepalive check happened less
than half of walsender timeout ago, otherwise the slower code path will
be taken.
---
 src/backend/replication/walsender.c | 31 +++
 1 file changed, 19 insertions(+), 12 deletions(-)

diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
index fa1db748b5..79c5153ac7 100644
--- a/src/backend/replication/walsender.c
+++ b/src/backend/replication/walsender.c
@@ -1151,6 +1151,8 @@ static void
 WalSndWriteData(LogicalDecodingContext *ctx, XLogRecPtr lsn, TransactionId xid,
 bool last_write)
 {
+	TimestampTz	now = GetCurrentTimestamp();
+
 	/* output previously gathered data in a CopyData packet */
 	pq_putmessage_noblock('d', ctx->out->data, ctx->out->len);
 
@@ -1160,23 +1162,28 @@ WalSndWriteData(LogicalDecodingContext *ctx, XLogRecPtr lsn, TransactionId xid,
 	 * several releases by streaming physical replication.
 	 */
 	resetStringInfo(&tmpbuf);
-	pq_sendint64(&tmpbuf, GetCurrentTimestamp());
+	pq_sendint64(&tmpbuf, now);
 	memcpy(&ctx->out->data[1 + sizeof(int64) + sizeof(int64)],
 		   tmpbuf.data, sizeof(int64));
 
-	/* fast path */
-	/* Try to flush pending output to the client */
-	if (pq_flush_if_writable() != 0)
-		WalSndShutdown();
+	/* Try taking fast path unless we get too close to walsender timeout. */
+	if (now < TimestampTzPlusMilliseconds(last_reply_timestamp,
+		  wal_sender_timeout / 2))
+	{
+		CHECK_FOR_INTERRUPTS();
 
-	if (!pq_is_send_pending())
-		return;
+		/* Try to flush pending output to the client */
+		if (pq_flush_if_writable() != 0)
+			WalSndShutdown();
+
+		if (!pq_is_send_pending())
+			return;
+	}
 
 	for (;;)
 	{
 		int			wakeEvents;
 		long		sleeptime;
-		TimestampTz now;
 
 		/*
 		 * Emergency bailout if postmaster has died.  This is to avoid the
@@ -1205,10 +1212,6 @@ WalSndWriteData(LogicalDecodingContext *ctx, XLogRecPtr lsn, TransactionId xid,
 		if (pq_flush_if_writable() != 0)
 			WalSndShutdown();
 
-		/* If we finished clearing the buffered data, we're done here. */
-		if (!pq_is_send_pending())
-			break;
-
 		now = GetCurrentTimestamp();
 
 		/* die if timeout was reached */
@@ -1217,6 +1220,10 @@ WalSndWriteData(LogicalDecodingContext *ctx, XLogRecPtr lsn, TransactionId xid,
 		/* Send keepalive if the time has come */
 		WalSndKeepaliveIfNecessary(now);
 
+		/* If we finished clearing the buffered data, we're done here. */
+		if (!pq_is_send_pending())
+			break;
+
 		sleeptime = WalSndComputeSleeptime(now);
 
 		wakeEvents = WL_LATCH_SET | WL_POSTMASTER_DEATH |
-- 
2.11.0




Re: [HACKERS] Commit fest 2017-11

2017-11-01 Thread Michael Paquier
On Wed, Nov 1, 2017 at 9:04 AM, Michael Paquier
<michael.paqu...@gmail.com> wrote:
> Anybody willing to take the hat of the commit fest manager? If nobody,
> I am fine to take the hat as default choice this time.

And now it is open. Let the fest begin.
-- 
Michael




Re: [HACKERS] pg_basebackup fails on Windows when using tablespace mapping

2017-11-01 Thread Michael Paquier
On Wed, Nov 1, 2017 at 2:24 PM, Peter Eisentraut
<peter.eisentr...@2ndquadrant.com> wrote:
> Committed to master.  I suppose this should be backpatched?

Thanks! Yes this should be back-patched.
-- 
Michael




Re: [HACKERS] pg_basebackup fails on Windows when using tablespace mapping

2017-11-01 Thread Peter Eisentraut
On 9/10/17 00:39, Michael Paquier wrote:
>> Okay. I have once again reviewed your patch and tested it on Windows
>> as well as Linux. The patch LGTM. I am now marking it as Ready For
>> Committer. Thanks.
> 
> Thanks for the review, Ashutosh.

Committed to master.  I suppose this should be backpatched?

(I changed the strcpy() to strlcpy() for better karma.)

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] strange relcache.c debug message

2017-11-01 Thread Tom Lane
Alvaro Herrera <alvhe...@2ndquadrant.com> writes:
> While messing with BRIN bugs, I noticed this debug message in the server
> log:
> 2017-11-01 12:33:24.042 CET [361429] DEBUG:  rehashing catalog cache id 14 
> for pg_opclass; 17 tups, 8 buckets en carácter 194
> notice that at the end it says "at character 194".  I suppose that's
> because of some leftover errposition() value, but why is it being
> reported in this message?

IIRC, there are places in the parser that set up an error callback that
will attach an errposition to any message at all that gets reported within
a particular chunk of code.  It's meant to catch something like "name not
found" but could produce this too.

    regards, tom lane




Re: [HACKERS] proposal: extend shm_mq to support more use cases

2017-11-01 Thread Craig Ringer
On 1 November 2017 at 21:24, Ildus Kurbangaliev
<i.kurbangal...@postgrespro.ru> wrote:
> Hello! Apparently the current version of shm_mq supports only one sender
> and one receiver. I think it could be very useful to add the possibility to
> change senders and receivers. It could be achieved by adding methods
> that remove the sender or receiver from a mq.

That'd put the complexity penalty on existing shm_mq users. You'd have
to find a way to make it work cheaply for existing 1:1 lightweight
uses.

I'm using shm_mq pretty extensively now, and mostly in one-to-many
two-way patterns between a persistent endpoint and a pool of transient
peers. It's not simple to get it right, there are a lot of traps
introduced by the queue's design for one-shot contexts where
everything is created together and destroyed together.

I think it'd make a lot of sense to make this easier, but I'd be
inclined to do it by layering on top of shm_mq rather than extending
it.

One thing that would make such patterns much easier would be a way to
safely reset a queue for re-use in a race-free way that didn't need
the use of any external synchronisation primitives. So once the
initial queues are set up, a client can attach, exchange messages on a
queue pair, and detach. And the persistent endpoint process can reset
the queues for re-use. Attempting to attach to a busy queue, or one
recently detached from, should fail gracefully.

Right now it's pretty much necessary to have a per-queue spinlock or
similar that you take before you overwrite a queue and re-create it.
And you need is-in-use flags for both persistent side and transient
side peers. That way the persistent side knows for sure there's no
transient side still attached to the queue when it overwrites it, and
so the transient side knows not to attempt to attach to a queue that's
already been used and detached by someone else because that Assert()s.
And you need the persistent side in-use flag so the transient side
knows the queue exists and is ready to be attached to, and it doesn't
try to attach to a queue that's in the process of being overwritten.

The issues I've had are:

* When one side detaches it can't tell if the other side is still
attached, detached, or has never attached. So you need extra book
keeping to find out "once I've detached, is it possible the other peer
could've attached and not yet detached" and ensure that no peer could
be attached when you zero and overwrite a queue.

* Attempting to set the receive/send proc and attach to an
already-attached queue Assert()s, there's no "try and fail
gracefully". So you need extra book keeping to record "this queue slot
has been used up, detached from, and needs to be reset". You can't
just try a queue until you find a free one, or retry your queue slot
until it's reset, or whatever.

* Receive/send proc is tracked by PGPROC not pid, and those slots are
re-used much faster and more predictably. So failures where you
attempt to set yourself as sender/receiver for a queue that's already
had a sender/receiver set can produce confusing errors.


I'd like to:

* Zero the PGPROC* for the sender when it detaches, and the receiver
when it detaches

* When the queue is marked as peer-detached but the local PGPROC is
still attached, have a function that takes the queue spinlock and
resets the queue to not-detached state with the local proc still
attached, ready for re-use by attaching a new peer.

* (At least optionally) return failure code when attempting to set
sender or receiver on a detached queue, not assert. So you can wait
and try again later when the queue is reset and ready for re-use, or
scan to find another free queue slot to attach to.

If there's a need to keep the sender and receiver PGPROC after detach
we could instead make the detached bool into a flag of
RECEIVER_DETACHED|SENDER_DETACHED. However, this adds
read-modify-write cycles to what's otherwise a simple set, so the
spinlock must be taken on detach or an atomic must be used. So I'd rather
just blindly clear the PGPROC pointer for whichever side is detaching.

These changes would make using shm_mq persistently MUCH easier,
without imposing significant cost on existing users. And it'd make it
way simpler to build a layer on top for a 1:m 2-way comms system like
Ildus is talking about.


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] PostgreSQL 10 parenthesized single-column updates can produce errors

2017-11-01 Thread Rob McColl
Between 9.6.5 and 10, the handling of parenthesized single-column UPDATE
statements changed. In 9.6.5, they were treated identically to
unparenthesized single-column UPDATES. In 10, they are treated as
multiple-column updates.  This results in this being valid in Postgres
9.6.5, but an error in Postgres 10:

CREATE TABLE test (id INT PRIMARY KEY, data INT);
INSERT INTO test VALUES (1, 1);
UPDATE test SET (data) = (2) WHERE id = 1;

In 10 and the current master, this produces the error:

errmsg("source for a multiple-column UPDATE item must be a sub-SELECT or
ROW() expression")
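
For context, the forms that 10 does accept for this statement (per the
error's hint) are:

  UPDATE test SET data = 2 WHERE id = 1;              -- unparenthesized
  UPDATE test SET (data) = ROW(2) WHERE id = 1;       -- ROW() expression
  UPDATE test SET (data) = (SELECT 2) WHERE id = 1;   -- sub-SELECT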

I believe that this is not an intended change or behavior, but is instead
an unintentional side effect of 906bfcad7ba7cb3863fe0e2a7810be8e3cd84fbd
Improve handling of "UPDATE ... SET (column_list) = row_constructor"
(https://github.com/postgres/postgres/commit/906bfcad7ba7cb3863fe0e2a7810be8e3cd84fbd).

This is a small patch to the grammar that restores the previous behavior by
adding a rule to the set_clause rule and modifying the final rule of the
set_clause rule to only match lists of more than one element.  I'm not sure
if there are more elegant or preferred ways to address this.

Compiled and tested on Ubuntu 17.04 Linux 4.10.0-33-generic x86_64.

Regression test added under the update test to cover the parenthesized
single-column case.

I see no reason this would affect performance.

Thanks,
-rob

-- 
Rob McColl
@robmccoll
r...@robmccoll.com
205.422.0909


[HACKERS] proposal: extend shm_mq to support more use cases

2017-11-01 Thread Ildus Kurbangaliev
Hello! Apparently the current version of shm_mq supports only one sender
and one receiver. I think it could be very useful to add the possibility to
change senders and receivers. It could be achieved by adding methods
that remove the sender or receiver from a mq.

One actual use case could be an extension that creates several
background workers so backends can communicate with them. At
startup the extension will create two mqs for each bgworker (one for
input data, one for output). When a backend wants to work with the
worker, it connects to one mq as sender, and to the other as receiver.
After working with the bgworker for a while, the backend disconnects from
the mqs and the worker becomes free for another backend. So the workers
can act like a cache, or keep some long connections with other services
and so on.

-- 
---
Ildus Kurbangaliev
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company




Re: [HACKERS] Partition-wise aggregation/grouping

2017-11-01 Thread Jeevan Chalke
On Sat, Oct 28, 2017 at 3:07 PM, Robert Haas  wrote:

> On Fri, Oct 27, 2017 at 1:01 PM, Jeevan Chalke
>  wrote:
> > 1. Added separate patch for costing Append node as discussed up-front in
> the
> > patch-set.
> > 2. Since we now cost Append node, we don't need
> > partition_wise_agg_cost_factor
> > GUC. So removed that. The remaining patch hence merged into main
> > implementation
> > patch.
> > 3. Updated rows in test-cases so that we will get partition-wise plans.
>
> With 0006 applied, cost_merge_append() is now a little bit confused:
>
> /*
>  * Also charge a small amount (arbitrarily set equal to operator cost)
> per
>  * extracted tuple.  We don't charge cpu_tuple_cost because a
> MergeAppend
>  * node doesn't do qual-checking or projection, so it has less overhead
>  * than most plan nodes.
>  */
> run_cost += cpu_operator_cost * tuples;
>
> /* Add MergeAppend node overhead like we do it for the Append node */
> run_cost += cpu_tuple_cost * DEFAULT_APPEND_COST_FACTOR * tuples;
>
> The first comment says that we don't add cpu_tuple_cost, and the
> second one then adds half of it anyway.
>

Yep.
But as David reported earlier, if we remove the first part, i.e. adding
cpu_operator_cost per tuple, Merge Append will be preferred over an Append
node unlike before. And thus, I thought it better to have both, but I am
not so sure. Should we remove that part altogether, or add both in a single
statement with updated comments?


> I think it's fine to have a #define for DEFAULT_APPEND_COST_FACTOR,
> because as you say it's used twice, but I don't think that should be
> exposed in cost.h; I'd make it private to costsize.c and rename it to
> something like APPEND_CPU_COST_MULTIPLIER.  The word DEFAULT, in
> particular, seems useless to me, since there's no provision for it to
> be overridden by a different value.
>

Agree. Will make that change.


>
> What testing, if any, can we think about doing with this plan to make
> sure it doesn't regress things?  For example, if we do a TPC-H run
> with partitioned tables and partition-wise join enabled, will any
> plans change with this patch?


I have tried doing this on my local developer machine. For 1GB database
size (tpc-h scale factor 1), I see no plan change with and without this
patch.

I have tried with scale factor 10, but the queries are not executing well
due to space and memory constraints. Can someone try that out?


>   Do they get faster or not?  Anyone have
> other ideas for what to test?
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


-- 
Jeevan Chalke
Technical Architect, Product Development
EnterpriseDB Corporation
The Enterprise PostgreSQL Company


Re: [HACKERS] Oracle to PostGre

2017-11-01 Thread Chris Travers
As a brief note, this is probably not the best list for this.  You would do
better to ask questions like this on -general where you have more
application developers and so forth.  This is more of an SQL question so
asking people who are hacking the codebase may not be the best way to get
it answered.

Also, it is Postgres or PostgreSQL.  People will assume you are totally new
if you call it Postgre.

On Wed, Nov 1, 2017 at 12:55 PM, Brahmam Eswar 
wrote:

> Hi,
>
> App is moving to Postgre from Oracle. After migrating, the stored procedure
> is throwing an error with collection type.
>
> *Oracle :*
>
> create or replace PROCEDURE"PROC1"
>  (
>
>  , REQ_CURR_CODE IN VARCHAR2
>  , IS_VALID OUT VARCHAR2
>  , ERROR_MSG OUT VARCHAR2
>  ) AS
>
>
>TYPE INV_LINES_RT IS RECORD(
>  VENDOR_NUM AP.CREATURE_TXN_LINE_ITEMS.VENDOR_NUM%TYPE,
>  VENDOR_SITE_CODE AP.CREATURE_TXN_LINE_ITEMS.
> VENDOR_SITE_CODE%TYPE,
>  INVOICE_NUM AP.CREATURE_TXN_LINE_ITEMS.INVOICE_NUM%TYPE,
>  TXN_CNT NUMBER
>);
>TYPE INV_LINES_T IS TABLE OF INV_LINES_RT;
>L_INV_LINES INV_LINES_T;
>IS_MULTI_VENDOR FINO_APRVL_BUS_CHANN_DEFAULTS.
> MULTI_VENDOR_REQ_ALLOWED%TYPE;
>BUS_CHANNEL_RECORD FINO_APRVL_BUS_CHANN_DEFAULTS%ROWTYPE;
> CAL_APRVL_AMT_BY_TOTAL_AMT FINO_APRVL_BUS_CHANN_DEFAULTS.
> CAL_APR_AMT_BY_INV%TYPE;
>
>
> Postgre :
>
> create or replace FUNCTION AP.VALIDATE_CRTR_LINE_ITEMS
> (
> REQ_CURR_CODE IN VARCHAR,
> IS_VALID OUT VARCHAR,
> ERROR_MSG OUT VARCHAR
> ) AS $$
>
> DECLARE
>
> INV_LINES_T ap.validate_crtr_line_items$inv_lines_rt ARRAY;
> L_INV_LINES INV_LINES_T%TYPE;
> L_INV_LINES$temporary_record ap.validate_crtr_line_items$inv_lines_rt;
> IS_MULTI_VENDOR AP.FINO_APRVL_BUS_CHANN_DEFAULTS.MULTI_VENDOR_REQ_
> ALLOWED%TYPE;
> BUS_CHANNEL_RECORD ap.fino_aprvl_bus_chann_defaults%ROWTYPE;
>  CAL_APRVL_AMT_BY_TOTAL_AMT AP.FINO_APRVL_BUS_CHANN_
> DEFAULTS.CAL_APR_AMT_BY_INV%TYPE;
>
>
> but it's throwing an error as : 0 SQLState: 42P01 Message: ERROR:
> relation "l_inv_lines" does not exist
> --
> Thanks & Regards,
> Brahmeswara Rao J.
>

When you ask on -general, please include the query which is actually
causing the problem.  My guess is that either you didn't declare the type
properly or there is some other error in your function, but the information
provided is not sufficient to answer it.
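
For illustration only, one frequent cause of that particular error when
converting Oracle TABLE OF collections (a guess, since the failing query
isn't shown): a query that reads from the collection as if it were a
relation, e.g. SELECT ... FROM TABLE(L_INV_LINES), has to become an
unnest() over the array variable in PostgreSQL:

  -- hypothetical rewrite (used in a cursor, FOR loop, or with INTO)
  SELECT r.invoice_num, r.txn_cnt
    FROM unnest(l_inv_lines) AS r;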

Best of luck asking on -general.

-- 
Best Regards,
Chris Travers
Database Administrator

Tel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com
Saarbrücker Straße 37a, 10405 Berlin


Re: [HACKERS] Account for cost and selectivity of HAVING quals

2017-11-01 Thread Ashutosh Bapat
On Wed, Nov 1, 2017 at 3:15 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Pursuant to the discussion at
> https://www.postgresql.org/message-id/20171029112420.8920b5f...@mx.zeyos.com
> here's a patch to fix the planner so that eval costs and selectivity of
> HAVING quals are factored into the appropriate plan node numbers.
> Perhaps unsurprisingly, this doesn't change the results of any
> existing regression tests.
>
> It's slightly annoying that this approach will result in calculating
> the eval costs and selectivity several times, once for each aggregation
> plan type we consider.  I thought about inserting RestrictInfo nodes
> into the havingQual so that those numbers could be cached, but that turned
> out to break various code that doesn't expect to see such nodes there.
> I'm not sure it's worth going to the trouble of fixing that; in the big
> scheme of things, the redundant calculations likely don't cost much, since
> we aren't going to have relevant statistics.
>
> Comments?  If anyone wants to do a real review of this, I'm happy to stick
> it into the upcoming CF; but without an expression of interest, I'll just
> push it.  I don't think there's anything terribly controversial here.
>

I am not able to see how the following hunk is related to $subject
*** create_result_path(PlannerInfo *root, Re
*** 1374,1379 
--- 1374,1380 
  pathnode->path.startup_cost = target->cost.startup;
  pathnode->path.total_cost = target->cost.startup +
  cpu_tuple_cost + target->cost.per_tuple;
+ /* Add cost of qual, if any --- but we ignore its selectivity */
  if (resconstantqual)
  {
  QualCostqual_cost;

And maybe we should try to explain why we can ignore selectivity.
Similarly for the changes in create_minmaxagg_path().

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] Try to fix endless loop in ecpg with informix mode

2017-11-01 Thread Michael Meskes
> Any comments?

Sorry, I've been working through the backlog of three weeks of
traveling.

> > I tried some tests with ecpg informix mode.
> > When trying to store float data into an integer var, I got an endless
> > loop.
> > 
> > The reason is: 
> > In informix mode, ecpg can accept
> > the string form of a float number when processing a query result.
> > While checking the string form of the float number, it seems
> > that ecpg forgot to skip the characters after '.'.
> > Then the outer loop will never stop because it hopes to see '\0'.
> > 
> > The first patch will reproduce the problem in ecpg's regress test.
> > The second patch tries to fix it in simple way.

Thanks for spotting and fixing. I changed your patch slightly and made
it check if the rest of the data is indeed digits, or else it would
accept something like "7.hello" as "7". 

Committed.

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Meskes at (Debian|Postgresql) dot Org
Jabber: michael at xmpp dot meskes dot org
VfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL




Re: [HACKERS] Try to fix endless loop in ecpg with informix mode

2017-11-01 Thread Julien Rouhaud
On Wed, Nov 1, 2017 at 12:22 PM, 高增琦 <pgf...@gmail.com> wrote:
> Any comments?
>


Hi,

You should register these patches for the next commitfest at
https://commitfest.postgresql.org/15/. As Michael pointed out earlier,
this commitfest will start soon so you should add your patches
quickly.

Regards.




[HACKERS] Oracle to PostGre

2017-11-01 Thread Brahmam Eswar
Hi,

App is moving to Postgre from Oracle. After migrating, the stored procedure
is throwing an error with collection type.

*Oracle :*

create or replace PROCEDURE"PROC1"
 (

 , REQ_CURR_CODE IN VARCHAR2
 , IS_VALID OUT VARCHAR2
 , ERROR_MSG OUT VARCHAR2
 ) AS


   TYPE INV_LINES_RT IS RECORD(
 VENDOR_NUM AP.CREATURE_TXN_LINE_ITEMS.VENDOR_NUM%TYPE,
 VENDOR_SITE_CODE AP.CREATURE_TXN_LINE_ITEMS.VENDOR_SITE_CODE%TYPE,
 INVOICE_NUM AP.CREATURE_TXN_LINE_ITEMS.INVOICE_NUM%TYPE,
 TXN_CNT NUMBER
   );
   TYPE INV_LINES_T IS TABLE OF INV_LINES_RT;
   L_INV_LINES INV_LINES_T;
   IS_MULTI_VENDOR
FINO_APRVL_BUS_CHANN_DEFAULTS.MULTI_VENDOR_REQ_ALLOWED%TYPE;
   BUS_CHANNEL_RECORD FINO_APRVL_BUS_CHANN_DEFAULTS%ROWTYPE;
CAL_APRVL_AMT_BY_TOTAL_AMT
FINO_APRVL_BUS_CHANN_DEFAULTS.CAL_APR_AMT_BY_INV%TYPE;


Postgre :

create or replace FUNCTION AP.VALIDATE_CRTR_LINE_ITEMS
(
REQ_CURR_CODE IN VARCHAR,
IS_VALID OUT VARCHAR,
ERROR_MSG OUT VARCHAR
) AS $$

DECLARE

INV_LINES_T ap.validate_crtr_line_items$inv_lines_rt ARRAY;
L_INV_LINES INV_LINES_T%TYPE;
L_INV_LINES$temporary_record ap.validate_crtr_line_items$inv_lines_rt;
IS_MULTI_VENDOR
AP.FINO_APRVL_BUS_CHANN_DEFAULTS.MULTI_VENDOR_REQ_ALLOWED%TYPE;
BUS_CHANNEL_RECORD ap.fino_aprvl_bus_chann_defaults%ROWTYPE;
 CAL_APRVL_AMT_BY_TOTAL_AMT
AP.FINO_APRVL_BUS_CHANN_DEFAULTS.CAL_APR_AMT_BY_INV%TYPE;


but it's throwing an error as : 0 SQLState: 42P01 Message: ERROR: relation
"l_inv_lines" does not exist
-- 
Thanks & Regards,
Brahmeswara Rao J.


Re: [HACKERS] Re: proposal - psql: possibility to specify sort for describe commands, when size is printed

2017-11-01 Thread Pavel Stehule
fprintf(output, _("  \\di[Ssd+] [PATTERN]list indexes\n"));
 	fprintf(output, _("  \\dl                list large objects, same as \\lo_list\n"));
 	fprintf(output, _("  \\dL[S+] [PATTERN]  list procedural languages\n"));
-	fprintf(output, _("  \\dm[S+] [PATTERN]  list materialized views\n"));
+	fprintf(output, _("  \\dm[Ssd+] [PATTERN]list materialized views\n"));
 	fprintf(output, _("  \\dn[S+] [PATTERN]  list schemas\n"));
 	fprintf(output, _("  \\do[S]  [PATTERN]  list operators\n"));
 	fprintf(output, _("  \\dO[S+] [PATTERN]  list collations\n"));
@@ -253,13 +253,13 @@ slashUsage(unsigned short int pager)
 	fprintf(output, _("  \\dRp[+] [PATTERN]  list replication publications\n"));
 	fprintf(output, _("  \\dRs[+] [PATTERN]  list replication subscriptions\n"));
 	fprintf(output, _("  \\ds[S+] [PATTERN]  list sequences\n"));
-	fprintf(output, _("  \\dt[S+] [PATTERN]  list tables\n"));
+	fprintf(output, _("  \\dt[Ssd+] [PATTERN]list tables\n"));
 	fprintf(output, _("  \\dT[S+] [PATTERN]  list data types\n"));
 	fprintf(output, _("  \\du[S+] [PATTERN]  list roles\n"));
 	fprintf(output, _("  \\dv[S+] [PATTERN]  list views\n"));
 	fprintf(output, _("  \\dx[+]  [PATTERN]  list extensions\n"));
 	fprintf(output, _("  \\dy [PATTERN]  list event triggers\n"));
-	fprintf(output, _("  \\l[+]   [PATTERN]  list databases\n"));
+	fprintf(output, _("  \\l[sd+] [PATTERN]  list databases\n"));
 	fprintf(output, _("  \\sf[+]  FUNCNAME   show a function's definition\n"));
 	fprintf(output, _("  \\sv[+]  VIEWNAME   show a view's definition\n"));
 	fprintf(output, _("  \\z  [PATTERN]  same as \\dp\n"));



[HACKERS] strange relcache.c debug message

2017-11-01 Thread Alvaro Herrera
While messing with BRIN bugs, I noticed this debug message in the server
log:

2017-11-01 12:33:24.042 CET [361429] DEBUG:  rehashing catalog cache id 14 for 
pg_opclass; 17 tups, 8 buckets en carácter 194

notice that at the end it says "at character 194".  I suppose that's
because of some leftover errposition() value, but why is it being
reported in this message?

-- 
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Try to fix endless loop in ecpg with informix mode

2017-11-01 Thread 高增琦
Any comments?

2017-10-26 16:03 GMT+08:00 高增琦 :

> Hi,
>
> I tried some tests with ecpg informix mode.
> When trying to store float data into an integer var, I got an endless loop.
>
> The reason is:
> In informix mode, ecpg can accept
> the string form of a float number when processing a query result.
> While checking the string form of the float number, it seems
> that ecpg forgot to skip the characters after '.'.
> Then the outer loop will never stop because it hopes to see '\0'.
>
> The first patch will reproduce the problem in ecpg's regress test.
> The second patch tries to fix it in simple way.
>
> --
> GaoZengqi
> pgf...@gmail.com
> zengqi...@gmail.com
>



-- 
GaoZengqi
pgf...@gmail.com
zengqi...@gmail.com


Re: [HACKERS] Dynamic result sets from procedures

2017-11-01 Thread Pavel Stehule
2017-10-31 22:08 GMT+01:00 Peter Eisentraut <
peter.eisentr...@2ndquadrant.com>:

> This patch is more of a demo of what could be done, not my primary
> focus, but if there is interest and some assistance, maybe we can make
> something out of it.  This patch also goes on top of "SQL procedures"
> version 1.
>
> The purpose is to return multiple result sets from a procedure.  This
> is, I think, a common request when coming from MS SQL and DB2.  MS SQL
> has a completely different procedure syntax, but this proposal is
> compatible with DB2, which as usual was the model for the SQL standard.
> So this is what it can do:
>
> CREATE PROCEDURE pdrstest1()
> LANGUAGE SQL
> AS $$
> DECLARE c1 CURSOR WITH RETURN FOR SELECT * FROM cp_test2;
> DECLARE c2 CURSOR WITH RETURN FOR SELECT * FROM cp_test3;
> $$;
>
> CALL pdrstest1();
>
> and that returns those two result sets to the client.
>
> That's all it does for now.  Things get more complex when you consider
> nested calls.  The SQL standard describes additional facilities how an
> outer procedure can accept a called procedure's result sets, or not.  In
> the thread on transaction control, I mentioned that we might need some
> kind of procedure call stack.  Something like that would be needed here
> as well.  There are also probably some namespacing issues around the
> cursors that need more investigation.
>
> A more mundane issue is how we get psql to print multiple result sets.
> I have included here a patch that does that, and you can see that new
> result sets start popping up in the regression tests already.  There is
> also one new error that needs further investigation.
>
> We need to think about how the \timing option should work in such
> scenarios.  Right now it does
>
> start timer
> run query
> fetch result
> stop timer
> print result
>
> If we had multiple result sets, the most natural flow would be
>
> start timer
> run query
> while result sets
> fetch result
> print result
> stop timer
> print time
>
> but that would include the printing time in the total time, which the
> current code explicitly does not.  We could also temporarily save the
> result sets, like
>
> start timer
> run query
> while result sets
> fetch result
> stop timer
> foreach result set
> print result
>
> but that would have a lot more overhead, potentially.
>
> Thoughts?
>

Does the total time make sense in this case?

Should not the total time be related to each fetched result?

Regards

Pavel


> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>
>
>


Re: [HACKERS] Adding column_constraint description in ALTER TABLE synopsis

2017-11-01 Thread Lætitia Avrot
Hi all,

Thanks Stephen for the suggestion. I wasn't thinking globally enough. I was
planning to look at it today but Amit was faster. So thanks Amit too!

Have a nice day (UGT),

Lætitia

2017-11-01 1:35 GMT+01:00 Amit Langote :

> On 2017/10/31 21:31, Stephen Frost wrote:
> > * Lætitia Avrot (laetitia.av...@gmail.com) wrote:
> >> As Amit Langote pointed out, the column_constraint definition is missing
> >> whereas it is used in ALTER TABLE synopsis. It can be easily found in
> the
> >> CREATE TABLE synopsis, but it's not very user friendly.
> >
> > Thanks, this looks pretty reasonable, but did you happen to look for any
> > other keywords in the ALTER TABLE that should really be in ALTER TABLE
> > also?
> >
> > I'm specifically looking at, at least, partition_bound_spec.  Maybe you
> > could propose an updated patch which addresses that also, and any other
> > cases you find?
>
Ah, yes.  I remember having left out partition_bound_spec simply because I
thought it was kind of how it was supposed to be done, seeing that neither
column_constraint nor table_constraint was expanded in the ALTER TABLE's
synopsis.
>
> It seems that there are indeed a couple of other things that need to be
> brought over to ALTER TABLE synopsis including partition_bound_spec.
> 9f295c08f877 [1] added table_constraint, but missed adding the description
> of index_parameters and exclude_element which are referenced therein.
>
> Attached find updated version of the Lætitia's patch.
>
> Thanks,
> Amit
>
> [1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=9f295c
>


[HACKERS] Commit fest 2017-11

2017-11-01 Thread Michael Paquier
Hi all,

At the moment of writing this email, it is 9PM AoE (Anywhere on Earth)
31st of October. This means that the next commit fest will begin in 3
hours, and that any hackers willing to register patches for this
commit fest have roughly three hours to do so (plus/minus N hours).

The current score is the following:
Needs review: 125.
Waiting on Author: 22.
Ready for Committer: 39.
This represents a total of 186 patches still pending for review, for
the second largest commit fest ever.

Anybody willing to take the hat of the commit fest manager? If nobody,
I am fine to take the hat as default choice this time.

Thanks,
-- 
Michael




Re: [HACKERS] WIP: Restricting pg_rewind to data/wal dirs

2017-11-01 Thread Chris Travers
and tablespaces)
>
> + * wal/transaction data
>
> + * and that is it.
>
> + *
>
> + * This array is null-terminated to make
>
> + * it easy to expand
>
> + */
>
> +
>
> +const char *rewind_dirs[] = {
>
> +"base",
>
> +"global",
>
> +"pg_commit_ts",
>
> +"pg_logical",
>
> +"pg_multixact",
>
> +"pg_serial",
>
> +"pg_subtrans",
>
> +"pg_tblspc",
>
> +"pg_twophase",
>
> +"pg_wal",
>
> +"pg_xact",
>
> +NULL
>
> +};
>
>
> From there we iterate over this array for directory-based approaches in
> copy_fetch.c and with one query per directory in libpq_fetch.c.  This also
> means shifting the basic interface from PQexec to PQexecParams.  It
> would be possible to move to binary formats too, but this was not done
> currently in order to simplify code review (that could be a separate
> independent patch at a later time).
>
> Testing Done:
>
> The extra files tests correctly test this change in behaviour.  The tests
> have been modified, and diagnostics in cases of failures expanded, in this
> case.  The other tests provide good coverage of whether pg_rewind does what
> it is supposed to do.
>
> Cleanup still required:
>
> I accidentally left Carp::Always in the PM in this perl module.  It will
> be fixed.
>
> I renamed one of the functions used to have a more descriptive name but
> currently did not remove the old function yet.
>
>
> Feedback is very welcome.  pg_rewind is a very nice piece of software.  I
> am hoping that these sorts of changes will help ensure that it is easier to
> use and provides more predictable results.
> --
> Best Regards,
> Chris Travers
> Database Administrator
>
> Tel: +49 162 9037 210 | Skype: einhverfr |
> www.adjust.com
> Saarbrücker Straße 37a, 10405 Berlin
>
>


-- 
Best Regards,
Chris Travers
Database Administrator

Tel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com
Saarbrücker Straße 37a, 10405 Berlin


pg_rewind_restrict_dirs.v2.patch
Description: Binary data



Re: [HACKERS] Patch: restrict pg_rewind to whitelisted directories

2017-11-01 Thread Chris Travers
On Tue, Oct 31, 2017 at 1:38 PM, Robert Haas  wrote:

> On Mon, Oct 30, 2017 at 6:44 PM, Chris Travers 
> wrote:
> > The attached patch is cleaned up and filed for the commit fest this next
> > month:
>
> It's generally better to post the patch on the same message as the
> discussion thread, or at least link back to the discussion thread from
> the new one.  Otherwise people may have trouble understanding the
> history.
>

Fair point.  Mostly concerned about the WIP marker there. This is my first
time around this.

I will then retire this thread and attach the patch on the other one.

>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>



-- 
Best Regards,
Chris Travers
Database Administrator

Tel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com
Saarbrücker Straße 37a, 10405 Berlin


Re: [HACKERS] Account for cost and selectivity of HAVING quals

2017-11-01 Thread Tels
Hello David,

On Tue, October 31, 2017 7:54 pm, David G. Johnston wrote:
> On Tue, Oct 31, 2017 at 4:31 PM, Tels <nospam-pg-ab...@bloodgate.com>
> wrote:
>
>>
>>
>> That looks odd to me, it first uses output_tuples in a formula, then
>> overwrites the value with a new value. Should these lines be swapped?
>>
>
> IIUC it is correct: the additional total_cost comes from processing every
> output group to check whether it is qualified - since every group is
> checked the incoming output_tuples from the prior grouping is used.  The
> side-effect of the effort is that the number of output_tuples has now been
> reduced to only those matching the qual - and so it now must take on a new
> value to represent this.

Ah, makes sense. Learned something new today.

Maybe it's worth adding a comment, or would everybody else besides me
understand it easily by looking at the code? :)

Thank you,

Tels




Re: [HACKERS] proposal: schema variables

2017-10-31 Thread Pavel Stehule
2017-11-01 6:07 GMT+01:00 Serge Rielau :

> "Although the syntax of CREATE TEMPORARY TABLE resembles that of the SQL
> standard, the effect is not the same. In the standard, temporary tables are
> defined just once and automatically exist (starting with empty contents) in
> every session that needs them. PostgreSQL instead requires each session
> to issue its own CREATE TEMPORARY TABLE command for each temporary table
> to be used. This allows different sessions to use the same temporary table
> name for different purposes, whereas the standard's approach constrains all
> instances of a given temporary table name to have the same table structure.”
> Yeah, that’s a DECLAREd table in my book. No wonder we didn’t link up.
>

This is a known discussion about local / global temp tables in PostgreSQL,
and a ToDo point: implementation of global temp tables in Postgres.

This temporary behavior is a marginal part of the proposal - so I can remove
it from the proposal - and later open a discussion about CREATE TEMPORARY
VARIABLE versus DECLARE VARIABLE.
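
For illustration, the PostgreSQL behavior being contrasted (hypothetical
names):

  -- session A
  CREATE TEMPORARY TABLE scratch (id int);

  -- session B, concurrently: same name, different structure, no conflict
  CREATE TEMPORARY TABLE scratch (v text, created timestamptz);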

Regards

Pavel

Serge

>


Re: [HACKERS] proposal: schema variables

2017-10-31 Thread Serge Rielau
" Although the syntax of CREATE TEMPORARY TABLE resembles that of the SQL 
standard, the effect is not the same. In the standard, temporary tables are 
defined just once and automatically exist (starting with empty contents) in 
every session that needs them. PostgreSQL instead requires each session to 
issue its own CREATE TEMPORARY TABLE command for each temporary table to be 
used. This allows different sessions to use the same temporary table name for 
different purposes, whereas the standard's approach constrains all instances of 
a given temporary table name to have the same table structure.”
Yeah, that’s a DECLAREd table in my book. No wonder we didn’t link up.
Cheers Serge

Re: [HACKERS] REINDEX CONCURRENTLY 2.0

2017-10-31 Thread Andreas Karlsson

Here is a rebased version of the patch.

Andreas
diff --git a/doc/src/sgml/mvcc.sgml b/doc/src/sgml/mvcc.sgml
index a0ca2851e5..f8c59ea127 100644
--- a/doc/src/sgml/mvcc.sgml
+++ b/doc/src/sgml/mvcc.sgml
@@ -926,6 +926,7 @@ ERROR:  could not serialize access due to read/write dependencies among transact
 
  Acquired by VACUUM (without FULL),
  ANALYZE, CREATE INDEX CONCURRENTLY,
+ REINDEX CONCURRENTLY,
  CREATE STATISTICS and
  ALTER TABLE VALIDATE and other
  ALTER TABLE variants (for full details see
diff --git a/doc/src/sgml/ref/reindex.sgml b/doc/src/sgml/ref/reindex.sgml
index 2e053c4c24..4019bad4c2 100644
--- a/doc/src/sgml/ref/reindex.sgml
+++ b/doc/src/sgml/ref/reindex.sgml
@@ -21,7 +21,7 @@ PostgreSQL documentation
 
  
 
-REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } name
+REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } [ CONCURRENTLY ] name
 
  
 
@@ -67,10 +67,7 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } 
   An index build with the CONCURRENTLY option failed, leaving
   an invalid index. Such indexes are useless but it can be
-  convenient to use REINDEX to rebuild them. Note that
-  REINDEX will not perform a concurrent build. To build the
-  index without interfering with production you should drop the index and
-  reissue the CREATE INDEX CONCURRENTLY command.
+  convenient to use REINDEX to rebuild them.
  
 
 
@@ -151,6 +148,21 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } 

 
+   
+CONCURRENTLY
+
+ 
+  When this option is used, PostgreSQL will rebuild the
+  index without taking any locks that prevent concurrent inserts,
+  updates, or deletes on the table; whereas a standard reindex build
+  locks out writes (but not reads) on the table until it's done.
+  There are several caveats to be aware of when using this option
+  see Rebuilding Indexes Concurrently below.
+ 
+
+   
+

 VERBOSE
 
@@ -231,6 +243,173 @@ REINDEX [ ( VERBOSE ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } 
 
+  
+   Rebuilding Indexes Concurrently
+
+   
+   index
+   rebuilding concurrently
+   
+
+   
+Rebuilding an index can interfere with regular operation of a database.
+Normally PostgreSQL locks the table whose index is rebuilt
+against writes and performs the entire index build with a single scan of the
+table. Other transactions can still read the table, but if they try to
+insert, update, or delete rows in the table they will block until the
+index rebuild is finished. This could have a severe effect if the system is
+a live production database. Very large tables can take many hours to be
+indexed, and even for smaller tables, an index rebuild can lock out writers
+for periods that are unacceptably long for a production system.
+   
+
+   
+PostgreSQL supports rebuilding indexes with minimum locking
+of writes.  This method is invoked by specifying the
+CONCURRENTLY option of REINDEX. When this option
+is used, PostgreSQL must perform two scans of the table
+for each index that needs to be rebuilt and in addition it must wait for
+all existing transactions that could potentially use the index to
+terminate. This method requires more total work than a standard index
+rebuild and takes significantly longer to complete as it needs to wait
+for unfinished transactions that might modify the index. However, since
+it allows normal operations to continue while the index is rebuilt, this
+method is useful for rebuilding indexes in a production environment. Of
+course, the extra CPU, memory and I/O load imposed by the index rebuild
+may slow down other operations.
+   
+
+   
+The following steps occur in a concurrent index build, each in a separate
+transaction except when the new index definitions are created, where all
+the concurrent entries are created using only one transaction. Note that
+if there are multiple indexes to be rebuilt then each step loops through
+all the indexes we're rebuilding, using a separate transaction for each one.
+REINDEX CONCURRENTLY proceeds as follows when rebuilding
+indexes:
+
+
+ 
+  
+   A new temporary index definition is added into the catalog
+   pg_index. This definition will be used to replace the
+   old index. This step is done as a single transaction for all the indexes
+   involved in this process, meaning that if
+   REINDEX CONCURRENTLY is run on a table with multiple
+   indexes, all the catalog entries of the new indexes are created within a
+   single transaction. A SHARE UPDATE EXCLUSIVE lock at
+   session level is taken on the indexes being reindexed as well as its
+   parent table to prevent any schema modification while processing.
+  
+ 
+ 
+  
+   A first 
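
For readers skimming the thread: per the synopsis above, usage boils down
to the following sketch (table and index names are invented):

    -- rebuild a single index without blocking concurrent writes
    REINDEX INDEX CONCURRENTLY my_index;

    -- rebuild every index on a table, with progress messages
    REINDEX (VERBOSE) TABLE CONCURRENTLY my_table;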

Re: [HACKERS] proposal: schema variables

2017-10-31 Thread Pavel Stehule
2017-10-31 22:28 GMT+01:00 srielau <se...@rielau.com>:

> Pavel,
>
> There is no
> DECLARE TEMP CURSOR
> or
> DECLARE TEMP variable in PLpgSQL
> and
>

sure .. DECLARE TEMP makes no sense; I was talking about the similarity between
DECLARE and CREATE TEMP


> CREATE TEMP TABLE has a different meaning from what I understand you
> envision for variables.
>
> But maybe I'm mistaken. Your original post did not describe the entire
> syntax:
> CREATE [TEMP] VARIABLE [IF NOT EXISTS] name AS type
>   [ DEFAULT expression ] [[NOT] NULL]
>   [ ON TRANSACTION END { RESET | DROP } ]
>   [ { VOLATILE | STABLE } ];
>
> Especially the TEMP is not spelled out and how its presence affects or
> doesn't ON TRANSACTION END.
> So may be if you elaborate I understand where you are coming from.
>

TEMP has the same functionality (and implementation) as our temp tables - so
at session end the temp variables are destroyed, but they can also be tied to
the transaction.
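
For illustration, under the proposed (not yet implemented) syntax quoted
above, that could look like this sketch - the variable name and default
are invented:

    CREATE TEMP VARIABLE my_counter AS int
      DEFAULT 0
      ON TRANSACTION END RESET;

i.e. the variable lives until session end, like a temp table, while the
ON TRANSACTION END clause controls what happens at each transaction end.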





Re: [HACKERS] Re: Anyone have experience benchmarking very high effective_io_concurrency on NVME's?

2017-10-31 Thread Craig Ringer
On 1 November 2017 at 11:49, Andres Freund <and...@anarazel.de> wrote:

> Right. It'd probably be good to be a bit more adaptive here. But it's
> hard to do with posix_fadvise - we'd need an operation that actually
> notifies us of IO completion.  If we were using, say, asynchronous
> direct IO, we could initiate the request and regularly check how many
> blocks ahead of the current window are already completed and adjust the
> queue based on that, rather than just firing off fadvises and hoping for
> the best.

In case it's of interest, I did some looking into using Linux's AIO
support in Pg a while ago, when chasing some issues around fsync
retries and handling of I/O errors.

It was a pretty serious dead end; it was clear that fsync support in
AIO is not only incomplete but inconsistent across kernel versions,
let alone other platforms.

But I see your name in the relevant threads, so you know that. To save
others the time, see:

* https://lwn.net/Articles/724198/
* https://lwn.net/Articles/671649/

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Re: PANIC: invalid index offnum: 186 when processing BRIN indexes in VACUUM

2017-10-31 Thread Alvaro Herrera
Tomas Vondra wrote:

> FWIW I can reproduce this on 9.5, and I don't even need to run the
> UPDATE part. That is, INSERT + VACUUM running concurrently is enough to
> produce broken BRIN indexes :-(

Hmm, I'm pretty sure we stress-tested brin in pretty much the same way.
But I see this misbehavior too.  Looking ...

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Re: Anyone have experience benchmarking very high effective_io_concurrency on NVME's?

2017-10-31 Thread Andres Freund
Hi,

On 2017-10-31 18:47:06 +0100, Tomas Vondra wrote:
> On 10/31/2017 04:48 PM, Greg Stark wrote:
> > On 31 October 2017 at 07:05, Chris Travers <chris.trav...@adjust.com>
> wrote:
> >> Hi;
> >>
> >> After Andres's excellent talk at PGConf we tried benchmarking
> >> effective_io_concurrency on some of our servers and found that those
> which
> >> have a number of NVME storage volumes could not fill the I/O queue
> even at
> >> the maximum setting (1000).
> >
> > And was the system still i/o bound? If the cpu was 100% busy then
> > perhaps Postgres just can't keep up with the I/O system. It would
> > depend on workload though, if you start many very large sequential
> > scans you may be able to push the i/o system harder.
> >
> > Keep in mind effective_io_concurrency only really affects bitmap
> > index scans (and to a small degree index scans). It works by issuing
> > posix_fadvise() calls for upcoming buffers one by one. That gets
> > multiple spindles active but it's not really going to scale to many
> > thousands of prefetches (and effective_io_concurrency of 1000
> > actually means 7485 prefetches). At some point those i/o are going
> > to start completing before Postgres even has a chance to start
> > processing the data.

Note that even if they finish well before postgres gets around to
looking at the block, you might still be seeing benefits. SSDs benefit
from larger reads, and a deeper queue gives more chances for reordering
/ coalescing of requests. It won't benefit the individual reader, but
might help the overall capacity of the system.


> Yeah, initiating the prefetches is not expensive, but it's not free
> either. So there's a trade-off between time spent on prefetching and
> processing the data.

Right. It'd probably be good to be a bit more adaptive here. But it's
hard to do with posix_fadvise - we'd need an operation that actually
notifies us of IO completion.  If we were using, say, asynchronous
direct IO, we could initiate the request and regularly check how many
blocks ahead of the current window are already completed and adjust the
queue based on that, rather than just firing off fadvises and hoping for
the best.


> I believe this may be actually illustrated using Amdahl's law - the I/O
> is the parallel part, and processing the data is the serial part. And no
> matter what you do, the device only has so much bandwidth, which defines
> the maximum possible speedup (compared to "no prefetch" case).

Right.


> Furthermore, the device does not wait for all the I/O requests to be
> submitted - it won't wait for 1000 requests and then go "OMG! There's a
> lot of work to do!" It starts processing the requests as they arrive,
> and some of them will complete before you're done with submitting the
> rest, so you'll never see all the requests in the queue at once.

It'd be interesting to see how much it helps to scale the size of
readahead requests with the distance from the current read
iterator. E.g. if you're less than 16 blocks away from the current head,
issue size-1 requests; up to 32, issue a 2-block request for consecutive blocks.
I suspect it won't help because at least linux's block layer / io
elevator seems quite successful at merging. E.g. for the query:
EXPLAIN ANALYZE SELECT sum(l_quantity) FROM lineitem where l_receiptdate 
between '1993-05-03' and '1993-08-03';
on a tpc-h scale dataset on my laptop, I see:
Device            r/s     w/s    rMB/s   wMB/s    rrqm/s   wrqm/s  %rrqm  %wrqm  r_await  w_await  aqu-sz  rareq-sz  wareq-sz  svctm  %util
sda          25702.00    0.00   495.27    0.00  37687.00     0.00  59.45   0.00     5.13     0.00  132.09     19.73      0.00   0.04 100.00

but it'd be worthwhile to see whether doing the merging ourselves allows
for deeper queues.
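
For anyone wanting to reproduce this kind of measurement, a minimal
experiment sketch (assumes a TPC-H-style lineitem table as above; watch
iostat -x in another terminal):

    -- prefetch depth used by bitmap heap scans in this session
    SET effective_io_concurrency = 256;

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT sum(l_quantity)
    FROM lineitem
    WHERE l_receiptdate BETWEEN '1993-05-03' AND '1993-08-03';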


I think we really should start incorporating explicit prefetching in
more places. Ordered indexscans might actually be one case that's not
too hard to do in a simple manner - whenever at an inner node, prefetch
the leaf nodes below it. We obviously could do better, but that might be
a decent starting point to get some numbers.

Greetings,

Andres Freund




Re: [HACKERS] WIP: long transactions on hot standby feedback replica / proof of concept

2017-10-31 Thread Masahiko Sawada
On Tue, Oct 31, 2017 at 6:17 PM, Alexander Korotkov
<a.korot...@postgrespro.ru> wrote:
> On Tue, Oct 31, 2017 at 5:16 AM, Masahiko Sawada <sawada.m...@gmail.com>
> wrote:
>>
>> On Mon, Oct 30, 2017 at 10:16 PM, Robert Haas <robertmh...@gmail.com>
>> wrote:
>> > On Tue, Oct 24, 2017 at 1:26 PM, Ivan Kartyshov
>> > <i.kartys...@postgrespro.ru> wrote:
>> >> Hello. I made some bugfixes and rewrite the patch.
>> >
>> > I don't think it's a good idea to deliberately leave the state of the
>> > standby different from the state of the  master on the theory that it
>> > won't matter.  I feel like that's something that's likely to come back
>> > to bite us.
>>
>> I agree with Robert. What happens if we intentionally don't apply the
>> truncation WAL and switched over? If we insert a tuple on the new
>> master server to a block that has been truncated on the old master,
>> the WAL apply on the new standby will fail? I guess there are such
>> corner cases causing failures of WAL replay after switch-over.
>
>
> Yes, that looks dangerous.  One approach to cope with that could be teaching
> the heap redo function to handle such situations.  But I expect this approach
> to be criticized for bad design.  And I understand the fairness of this
> criticism.
>
> However, from the user's point of view, the current behavior of
> hot_standby_feedback is just broken, because it both increases bloat and
> doesn't guarantee that read-only queries on the standby won't be cancelled
> because of vacuum.  Therefore, we should be looking for a solution: if one
> approach isn't good enough, then we should look for another approach.
>
> I can propose following alternative approach: teach read-only queries on hot
> standby to tolerate concurrent relation truncation.  Therefore, when
> non-existent heap page is accessed on hot standby, we can know that it was
> deleted by concurrent truncation and should be assumed to be empty.  Any
> thoughts?
>

Do you also mean that applying WAL for AccessExclusiveLock would always be
skipped on standby servers, to allow scans to access the relation?

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] Dynamic result sets from procedures

2017-10-31 Thread Craig Ringer
On 1 November 2017 at 05:08, Peter Eisentraut
<peter.eisentr...@2ndquadrant.com> wrote:

> CREATE PROCEDURE pdrstest1()
> LANGUAGE SQL
> AS $$
> DECLARE c1 CURSOR WITH RETURN FOR SELECT * FROM cp_test2;
> DECLARE c2 CURSOR WITH RETURN FOR SELECT * FROM cp_test3;
> $$;
>
> CALL pdrstest1();

FWIW, this is similar to the model already used by PgJDBC to emulate
multiple result sets, though the current support in the driver is
rather crude. It detects a REFCURSOR in an output parameter / result
set and transparently FETCHes the result set, making it look to the
client app like it's a nested result set.

This shouldn't conflict with what you're doing because the driver does
not follow the JDBC standard behaviour of using
Statement.getMoreResults() and Statement.getResultSet() for multiple
result sets. That's currently only used by PgJDBC when fetching result
sets from batch query executions. Instead, the multiple result set
emulation requires the caller to 'getObject' the 'refcursor' field's
result-object, then cast it to ResultSet, and treat it as a new
(nested) result set.

True multiple result sets would be exposed in PgJDBC via getMoreResults().
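
For reference, the server-side pattern that emulation rides on is plain
refcursors plus FETCH - roughly this sketch (the function and cursor
names are invented; cp_test2 follows the example upthread):

    CREATE FUNCTION get_rows() RETURNS refcursor AS $$
    DECLARE
        c refcursor := 'mycur';
    BEGIN
        OPEN c FOR SELECT * FROM cp_test2;
        RETURN c;  -- the client receives the cursor's name
    END;
    $$ LANGUAGE plpgsql;

    BEGIN;
    SELECT get_rows();     -- returns 'mycur'
    FETCH ALL FROM mycur;  -- what the driver issues transparently
    COMMIT;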

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] PATCH: enabling parallel execution for cursors explicitly (experimental)

2017-10-31 Thread Craig Ringer
> Now, I agree this is somewhat more limited than I hoped for, but OTOH it
> still solves the issue I initially aimed for (processing large query
> results in efficient way).

I don't quite understand this part.

We already send results to the client in a stream unless it's
something we have to materialize, in which case a cursor won't help
anyway.

If the client wants to fetch in chunks it can use a portal and limited
size fetches. That shouldn't (?) be parallel-unsafe, since nothing
else can happen in the middle anyway.
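
That is, something along these lines (a sketch; the table name is
invented):

    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM big_table;
    FETCH FORWARD 1000 FROM c;  -- repeat until no rows come back
    CLOSE c;
    COMMIT;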

But in most cases the client can just fetch all, and let the socket
buffering take care of things, reading results only when it wants
them, and letting the server block when the windows are full.

That's not to say that SQL-level cursor support wouldn't be nice. I'm
just trying to better understand what it's solving.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Another oddity in handling of WCO constraints in postgres_fdw

2017-10-31 Thread Robert Haas
On Wed, Oct 4, 2017 at 5:58 PM, Ashutosh Bapat
<ashutosh.ba...@enterprisedb.com> wrote:
> The view with WCO is local but the modification which violates WCO is
> being made on remote server by a trigger on remote table. Trying to
> control that doesn't seem to be a good idea, just like we can't
> control what rows get inserted on the foreign server when they violate
> local constraints.

I think that's a fair point.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation)

2017-10-31 Thread Peter Geoghegan
On Tue, Oct 31, 2017 at 5:07 PM, Thomas Munro
<thomas.mu...@enterprisedb.com> wrote:
> So that's this bit:
>
> + pg_itoa(worker, filename);
> + lts->pfile = BufFileCreateShared(fileset, filename);
>
> ... and:
>
> + pg_itoa(i, filename);
> + file = BufFileOpenShared(fileset, filename);

Right.

> What's wrong with using a worker number like this?

I guess nothing, though there is the question of discoverability for
DBAs, etc. You do address this separately, by having (potentially)
descriptive filenames, as you go into.

> It's not random choice: buffile.c creates a uniquely named directory
> (or directories, if you have more than one location configured in the
> temp_tablespaces GUC) to hold all the backing files involved in each
> BufFileSet.  Naming of BufFiles within the BufFileSet is the caller's
> problem, and a worker number seems like a reasonable choice to me.  It
> won't collide with a concurrent parallel CREATE INDEX because that'll
> be using its own BufFileSet.

Oh, I see. I may have jumped the gun on that one.

> One complaint about the current coding that someone might object to:
> MakeSharedSegmentPath() just dumps the caller's BufFile name into a
> path without sanitisation: I should fix that so that we only accept
> fairly limited strings here.  Another complaint is that perhaps fd.c
> knows too much about buffile.c's business.  For example,
> RemovePgTempFilesInDir() knows about the ".set" directories created by
> buffile.c, which might be called a layering violation.  Perhaps the
> set/directory logic should move entirely into fd.c, so you'd call
> FileSetInit(FileSet *), not BufFileSetInit(BufFileSet *), and then
> BufFileOpenShared() would take a FileSet *, not a BufFileSet *.
> Thoughts?

I'm going to make an item on my personal TODO list for that. No useful
insights on that right now, though.

> 3.  sharedtuplestore.c takes a caller-supplied BufFileSet and creates
> its shared BufFiles in there.  Earlier versions created and owned a
> BufFileSet, but in the current Parallel Hash patch I create loads of
> separate SharedTuplestore objects but I didn't want to create load of
> directories to back them, so you can give them all the same
> BufFileSet.  That works because SharedTuplestores are also given a
> name, and it's the caller's job (in my case nodeHash.c) to make sure
> the SharedTuplestores are given unique names within the same
> BufFileSet.  For Parallel Hash you'll see names like 'i3of8' (inner
> batch 3 of 8).  There is no need for there to be in any sort of
> central registry for that though, because it rides on top of the
> guarantees from 2 above: buffile.c will put those files into a
> uniquely named directory, and that works as long as no one else is
> allowed to create files or directories in the temp directory that
> collide with its reserved pattern /^pgsql_tmp.+\.set$/.  For the same
> reason, parallel CREATE INDEX is free to use worker numbers as BufFile
> names, since it has its own BufFileSet to work within.

If the new standard is that you have temp file names that suggest the
purpose of each temp file, then that may be something that parallel
CREATE INDEX should buy into.

> In an earlier version, BufFileSet was one of those annoying data
> structures with a FLEXIBLE_ARRAY_MEMBER that you'd use as an
> incomplete type (declared but not defined in the includable header),
> and here it was being used "inside" (or rather after) SharedSort,
> which *itself* had a FLEXIBLE_ARRAY_MEMBER.  The reason for the
> variable sized object was that I needed all backends to agree on the
> set of temporary tablespace OIDs, of which there could be any number,
> but I also needed a 'flat' (pointer-free) object I could stick in
> relocatable shared memory.  In the newest version I changed that
> flexible array to tablespaces[8], because 8 should be enough
> tablespaces for anyone (TM).

I guess that that's something that you'll need to take up with Andres,
if you haven't already. I have a hard time imagining a single query
needed to use more than that many tablespaces at once, so maybe this
is fine.

> I don't really believe anyone uses
> temp_tablespaces for IO load balancing anymore and I hate code like
> the above.  So I think Rushabh should now remove the above-quoted code
> and just use a BufFileSet directly as a member of SharedSort.

FWIW, I agree with you that nobody uses temp_tablespaces this way
these days. This seems like a discussion for your hash join patch,
though. I'm happy to buy into that.

-- 
Peter Geoghegan




Re: [HACKERS] Adding column_constraint description in ALTER TABLE synopsis

2017-10-31 Thread Amit Langote
On 2017/10/31 21:31, Stephen Frost wrote:
> * Lætitia Avrot (laetitia.av...@gmail.com) wrote:
>> As Amit Langot pointed out, the column_constraint definition is missing
>> whereas it is used in ALTER TABLE synopsis. It can be easily found in the
>> CREATE TABLE synopsis, but it's not very user friendly.
>
> Thanks, this looks pretty reasonable, but did you happen to look for any
> other keywords in the CREATE TABLE synopsis that should really be in ALTER TABLE
> also?
> 
> I'm specifically looking at, at least, partition_bound_spec.  Maybe you
> could propose an updated patch which addresses that also, and any other
> cases you find?

Ah, yes.  I remember having left out partition_bound_spec simply because I
thought that was how it was supposed to be done, seeing that neither
column_constraint nor table_constraint was expanded in ALTER TABLE's
synopsis.

It seems that there are indeed a couple of other things that need to be
brought over to the ALTER TABLE synopsis, including partition_bound_spec.
9f295c08f877 [1] added table_constraint, but missed adding the descriptions
of index_parameters and exclude_element, which are referenced therein.

Attached find an updated version of Lætitia's patch.

Thanks,
Amit

[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=9f295c
diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml
index 41acda003f..e059f87875 100644
--- a/doc/src/sgml/ref/alter_table.sgml
+++ b/doc/src/sgml/ref/alter_table.sgml
@@ -85,6 +85,20 @@ ALTER TABLE [ IF EXISTS ] name
 OWNER TO { new_owner | CURRENT_USER | SESSION_USER }
 REPLICA IDENTITY { DEFAULT | USING INDEX index_name | FULL | NOTHING }
 
+and column_constraint is:
+
+[ CONSTRAINT constraint_name ]
+{ NOT NULL |
+  NULL |
+  CHECK ( expression ) [ NO INHERIT ] |
+  DEFAULT default_expr |
+  GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ ( sequence_options ) ] |
+  UNIQUE index_parameters |
+  PRIMARY KEY index_parameters |
+  REFERENCES reftable [ ( refcolumn ) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ]
+    [ ON DELETE action ] [ ON UPDATE action ] }
+[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
+
 and table_constraint is:
 
 [ CONSTRAINT constraint_name ]
@@ -96,6 +110,15 @@ ALTER TABLE [ IF EXISTS ] name
 [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ] [ ON DELETE action ] [ ON UPDATE action ] }
 [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
 
+index_parameters in UNIQUE, PRIMARY KEY, and EXCLUDE constraints are:
+
+[ WITH ( storage_parameter [= value] [, ... ] ) ]
+[ USING INDEX TABLESPACE tablespace_name ]
+
+exclude_element in an EXCLUDE constraint is:
+
+{ column_name | ( expression ) } [ opclass ] [ ASC | DESC ] [ NULLS { FIRST | LAST } ]
+
 and table_constraint_using_index is:
 
 [ CONSTRAINT constraint_name ]
@@ -104,6 +127,13 @@ ALTER TABLE [ IF EXISTS ] name
 
  
 
+and partition_bound_spec is:
+
+IN ( { numeric_literal | string_literal | NULL } [, ...] ) |
+FROM ( { numeric_literal | string_literal | MINVALUE | MAXVALUE } [, ...] )
+  TO ( { numeric_literal | string_literal | MINVALUE | MAXVALUE } [, ...] )
+
+
  
   Description
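
As a quick check that the added synopsis matches what the server accepts,
a sketch exercising some of the column_constraint forms (table and
column names are invented):

    ALTER TABLE orders
      ADD COLUMN customer_id integer
        REFERENCES customers (id) ON DELETE CASCADE;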
 



[HACKERS] Removing LEFT JOINs in more cases

2017-10-31 Thread David Rowley
Hackers,

Normally we'll only ever remove a LEFT JOIN relation if it's unused
and there's no possibility that the join would cause row duplication.
To check that the join wouldn't cause row duplication we make use of
proofs, such as unique indexes, or for sub-queries, we make use of
DISTINCT and GROUP BY clauses.

There's another case that we don't handle, and it's VERY simple to test for.

Quite simply, it seems we could remove the join in cases such as:

create table t1 (id int primary key);
create table t2 (id int primary key, b int not null);

insert into t2 values(1,1),(2,1);
insert into t1 values(1);

select distinct t1.* from t1 left join t2 on t1.id=t2.b;

and

select t1.id from t1 left join t2 on t1.id=t2.b GROUP BY t1.id;

but not:

select t1.id,count(*) from t1 left join t2 on t1.id=t2.b GROUP BY t1.id;

In this case, the join *can* cause row duplicates, but the distinct or
group by would filter these out again anyway, so in these cases, we'd
not only get the benefit of not joining but also not having to remove
the duplicate rows caused by the join.

Given how simple the code is to support this, it seems to me to be
worth handling.

A patch to do this is attached.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


0001-Support-removing-LEFT-JOINs-with-DISTINCT-GROUP-BY.patch
Description: Binary data



Re: [HACKERS] [bug fix] postgres.exe crashes with access violation on Windows while starting up

2017-10-31 Thread Tsunakawa, Takayuki
From: Michael Paquier [mailto:michael.paqu...@gmail.com]
> On Tue, Oct 31, 2017 at 6:59 AM, Tsunakawa, Takayuki
> <tsunakawa.ta...@jp.fujitsu.com> wrote:
> > When CurrentMemoryContext is NULL, the message  is logged with
> ReportEventA().  This is similar to write_console().
> 
> My point is that as Postgres is running as a service, isn't it wrong to
> write a message to stderr as a fallback if the memory context is not set?
> You would lose a message. It seems to me that for an operation that can
> happen at a low-level like the postmaster startup, we should really use
> a low-level operation as well.

I'm sorry I may not have been clear.  With this patch, write_eventlog() outputs 
the message to event log with ReportEventA() when CurrentMemoryContext is NULL. 
 It doesn't write to stderr.  So the message won't be lost.

Regards
Takayuki Tsunakawa





Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation)

2017-10-31 Thread Thomas Munro
There is no need for there to be in any sort of
central registry for that though, because it rides on top of the
guarantees from 2 above: buffile.c will put those files into a
uniquely named directory, and that works as long as no one else is
allowed to create files or directories in the temp directory that
collide with its reserved pattern /^pgsql_tmp.+\.set$/.  For the same
reason, parallel CREATE INDEX is free to use worker numbers as BufFile
names, since it has its own BufFileSet to work within.

> * What's this all about?:
>
> + /* Accessor for the SharedBufFileSet that is at the end of Sharedsort. */
> + #define GetSharedBufFileSet(shared)\
> +   ((BufFileSet *) (&(shared)->tapes[(shared)->nTapes]))

In an earlier version, BufFileSet was one of those annoying data
structures with a FLEXIBLE_ARRAY_MEMBER that you'd use as an
incomplete type (declared but not defined in the includable header),
and here it was being used "inside" (or rather after) SharedSort,
which *itself* had a FLEXIBLE_ARRAY_MEMBER.  The reason for the
variable sized object was that I needed all backends to agree on the
set of temporary tablespace OIDs, of which there could be any number,
but I also needed a 'flat' (pointer-free) object I could stick in
relocatable shared memory.  In the newest version I changed that
flexible array to tablespaces[8], because 8 should be enough
tablespaces for anyone (TM).  I don't really believe anyone uses
temp_tablespaces for IO load balancing anymore and I hate code like
the above.  So I think Rushabh should now remove the above-quoted code
and just use a BufFileSet directly as a member of SharedSort.

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Account for cost and selectivity of HAVING quals

2017-10-31 Thread Tom Lane
"David G. Johnston" <david.g.johns...@gmail.com> writes:
> On Tue, Oct 31, 2017 at 4:31 PM, Tels <nospam-pg-ab...@bloodgate.com> wrote:
>> That looks odd to me, it first uses output_tuples in a formula, then
>> overwrites the value with a new value. Should these lines be swapped?

> IIUC it is correct: the additional total_cost comes from processing every
> output group to check whether it is qualified - since every group is
> checked the incoming output_tuples from the prior grouping is used.

Right --- we'll expend the effort to compute the HAVING expression once
per group row, whether the row passes the qual or not.

    regards, tom lane




Re: [HACKERS] Account for cost and selectivity of HAVING quals

2017-10-31 Thread David G. Johnston
On Tue, Oct 31, 2017 at 4:31 PM, Tels <nospam-pg-ab...@bloodgate.com> wrote:

>
>
> That looks odd to me, it first uses output_tuples in a formula, then
> overwrites the value with a new value. Should these lines be swapped?
>

IIUC it is correct: the additional total_cost comes from processing every
output group to check whether it is qualified - since every group is
checked the incoming output_tuples from the prior grouping is used.  The
side-effect of the effort is that the number of output_tuples has now been
reduced to only those matching the qual - and so it now must take on a new
value to represent this.
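
In other words, the ordering of the two lines mirrors the costing logic -
a sketch using the patch's own variable names (the selectivity factor is
spelled out here only for illustration):

    /* HAVING is evaluated once for every incoming group */
    total_cost += qual_cost.startup + output_tuples * qual_cost.per_tuple;

    /* only the qualifying fraction of groups flows on; having_selectivity
     * stands in for the clause-selectivity estimate (name invented) */
    output_tuples = clamp_row_est(output_tuples * having_selectivity);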

David J.


[HACKERS] Proposal: generic WAL compression

2017-10-31 Thread Oleg Ivanov

Hackers,

a few years ago generic WAL was proposed by Alexander Korotkov 
(https://www.postgresql.org/message-id/flat/CAPpHfdsXwZmojm6Dx%2BTJnpYk27kT4o7Ri6X_4OSWcByu1Rm%2BVA%40mail.gmail.com#capphfdsxwzmojm6dx+tjnpyk27kt4o7ri6x_4oswcbyu1rm...@mail.gmail.com). 
and was committed into PostgreSQL 9.6.
One of the generic WAL advantages is the common interface for safe 
interaction with WAL for modules and extensions. The interface allows 
module to register the page, then change it, and then generic WAL 
generates and writes into xlog the algorithm of changing the old version 
of page into the new one. In the case of crash and recovery this 
algorithm may be applied.


Bloom and RUM indexes use generic WAL. The issue is that the generic 
algorithm of turning the old page into the new one is not optimal in the 
sense of record size in xlog. Now the main idea of the approach is to 
find all positions in the page where new version is different from the 
original one and write these changes into xlog. It works well if the 
page was rewritten completely or if only a few bytes have been changed. 
Nevertheless, this approach produces too large WAL record in the case of 
inserting or deleting a few bytes into the page. In this case there are 
almost no position where the old version and the new one are equal, so 
the record size is near the page size, but actual amount of changes in 
the page is small. This is exactly what happens often in RUM indexes.


In order to overcome that issue, I would like to propose a patch which 
provides the possibility of using another approach to WAL record 
construction. If the new approach fails to produce a short enough record, 
it falls back to the original approach. The main idea of the new approach 
is not to perform a bytewise comparison of pages, but to find the minimal 
edit distance between the two pages and the corresponding edit script. 
In the general case, finding the edit distance takes O(N^2) time and 
memory. But there is also an algorithm which requires only O(ND) time 
and O(D^2) memory, where D is the edit distance between the two 
sequences. For a given D, the algorithm can also show that the minimal 
edit distance between the two sequences is more than D, in the same 
amount of time and memory.
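
As a back-of-the-envelope check (assuming the default 8 kB page size):
with N = BLCKSZ = 8192 and D < 64, O(ND) is roughly 8192 * 64 ~ 524k
elementary steps and O(D^2) is at most 4096 cells of memory per record,
versus roughly 67M steps for the O(N^2) approach.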


The special case of this algorithm which does not consider replace 
operations is described in the paper 
(http://www.xmailserver.org/diff2.pdf). A version of this algorithm 
which consumes O(ND) time and O(N) memory is used in the diff console 
command, but for our purposes we don't need to increase the time 
constant in order to decrease the memory complexity. For RUM indexes we 
usually have a small enough edit distance (less than 64), so D^2 is not 
too much to store.


The results of experiments:

+--------------------------+---------------+---------------+---------------+---------------+
| Name                     | WAL_diff(MB)  | WAL_orig(MB)  | Time_diff(s)  | Time_orig(s)  |
+--------------------------+---------------+---------------+---------------+---------------+
| rum: make installcheck   | 38.9          | 82.5          | 4.37          | 4.16          |
| bloom: make installcheck | 1.0           | 1.0           | 0.66          | 0.53          |
| test00.sql               | 20.5          | 51.0          | 1.86          | 1.41          |
| test01.sql               | 207.1         | 219.7         | 8.06          | 6.89          |
+--------------------------+---------------+---------------+---------------+---------------+

We can see that the patch introduces a small slowdown, but compresses 
generic WAL efficiently for RUM indexes. I'm also going to run a few more 
experiments on this patch with other data.


The patch was tested on Lubuntu 14.04, but should not contain any 
platform-specific items. The patch, the files and scripts for doing the 
experiments and performance tests are attached.


Oleg Ivanov
Postgres Professional
The Russian PostgreSQL Company
diff --git a/src/backend/access/transam/generic_xlog.c b/src/backend/access/transam/generic_xlog.c
index 3adbf7b..cd0ed2a 100644
--- a/src/backend/access/transam/generic_xlog.c
+++ b/src/backend/access/transam/generic_xlog.c
@@ -43,9 +43,18 @@
  * a full page's worth of data.
  *-
  */
-#define FRAGMENT_HEADER_SIZE	(2 * sizeof(OffsetNumber))
+#define FRAGMENT_HEADER_SIZE	(2 * (sizeof(OffsetNumber) + \
+	  sizeof(char) + sizeof(int)))
 #define MATCH_THRESHOLD			FRAGMENT_HEADER_SIZE
-#define MAX_DELTA_SIZE			(BLCKSZ + 2 * FRAGMENT_HEADER_SIZE)
+#define MAX_DELTA_SIZE			(BLCKSZ + 2 * FRAGMENT_HEADER_SIZE) + sizeof(bool

Re: [HACKERS] Account for cost and selectivity of HAVING quals

2017-10-31 Thread Tels
Moin,

On Tue, October 31, 2017 5:45 pm, Tom Lane wrote:
> Pursuant to the discussion at
> https://www.postgresql.org/message-id/20171029112420.8920b5f...@mx.zeyos.com
> here's a patch to fix the planner so that eval costs and selectivity of
> HAVING quals are factored into the appropriate plan node numbers.
> Perhaps unsurprisingly, this doesn't change the results of any
> existing regression tests.
>
> It's slightly annoying that this approach will result in calculating
> the eval costs and selectivity several times, once for each aggregation
> plan type we consider.  I thought about inserting RestrictInfo nodes
> into the havingQual so that those numbers could be cached, but that turned
> out to break various code that doesn't expect to see such nodes there.
> I'm not sure it's worth going to the trouble of fixing that; in the big
> scheme of things, the redundant calculations likely don't cost much, since
> we aren't going to have relevant statistics.
>
> Comments?  If anyone wants to do a real review of this, I'm happy to stick
> it into the upcoming CF; but without an expression of interest, I'll just
> push it.  I don't think there's anything terribly controversial here.

Not a review, but the patch has this:


+ total_cost += qual_cost.startup + output_tuples * qual_cost.per_tuple;
+
+ output_tuples = clamp_row_est(output_tuples *

...

That looks odd to me, it first uses output_tuples in a formula, then
overwrites the value with a new value. Should these lines be swapped?

Best regards,

Tels




Re: [HACKERS] Re: PANIC: invalid index offnum: 186 when processing BRIN indexes in VACUUM

2017-10-31 Thread Tomas Vondra


On 10/31/2017 11:44 PM, Tomas Vondra wrote:
> ...
> Unfortunately, I think we still have a problem ... I've been wondering
> if we end up producing correct indexes, so I've done a simple test.
> 
> 1) create the table as before
> 
> 2) let the insert + vacuum run for some time, to see if there are
> crashes (result: no crashes after one hour, inserting ~92M rows)
> 
> 3) do a bunch of random updates on the data (while still doing the
> concurrent vacuum in another session)
> 
> 4) run a bunch of simple queries to compare the results, essentially
> 
>-- BRIN index
>SET enable_bitmapscan = on;
>SELECT COUNT(*) FROM brin_test WHERE a = $1;
> 
> 
>-- seq scan
>SET enable_bitmapscan = off;
>SELECT COUNT(*) FROM brin_test WHERE a = $1;
> 
> and unfortunately what I get is not particularly pleasant:
> 
> test=# set enable_bitmapscan = on;
> SET
> test=# select count(*) from brin_test where a = 0;
>  count
> ---
>   9062
> (1 row)
> 
> test=# set enable_bitmapscan = off;
> SET
> test=# select count(*) from brin_test where a = 0;
>  count
> ---
>   9175
> (1 row)
> 
> Attached is a SQL script with commands I used. You'll need to copy the
> commands into multiple psql sessions, though, to simulate concurrent
> activity).
> 

FWIW I can reproduce this on 9.5, and I don't even need to run the
UPDATE part. That is, INSERT + VACUUM running concurrently is enough to
produce broken BRIN indexes :-(


regards

-- 
Tomas Vondra      http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] PostgreSQL 10 parenthesized single-column updates can produce errors

2017-10-31 Thread Tom Lane
"David G. Johnston" <david.g.johns...@gmail.com> writes:
> Definitely moderates my opinion in my concurrent email...though
> postponement is not strictly bad given the seeming frequency of the
> existing problematic syntax in the wild already.

Yeah, I'd hoped to get some capability extensions done here before v10
shipped, in line with the theory I've expressed in the past that it's
better if you can point to actual new features justifying a compatibility
break.  However, that didn't happen in time.

I'm disinclined to revert the change though; if there are people making
use of this oddity now, then the longer we leave it in place the more
people are going to be hurt when we do break it.

If I had a time machine, I'd go back and fix the original multi-column
SET patch so that it required the word ROW in all cases --- then at
least it'd have accepted a conformant subset of the standard syntax.
Oh well.

regards, tom lane




Re: [HACKERS] proposal: schema variables

2017-10-31 Thread Gilles Darold
Le 31/10/2017 à 23:36, Gilles Darold a écrit :
> Le 31/10/2017 à 22:28, srielau a écrit :
>> Pavel,
>>
>> There is no
>> DECLARE TEMP CURSOR
>> or
>> DECLARE TEMP variable in PLpgSQL
>> and
>> CREATE TEMP TABLE has a different meaning from what I understand you
>> envision for variables.
>>
>> But maybe I'm mistaken. Your original post did not describe the entire
>> syntax:
>> CREATE [TEMP] VARIABLE [IF NOT EXISTS] name AS type 
>>   [ DEFAULT expression ] [[NOT] NULL]
>>   [ ON TRANSACTION END { RESET | DROP } ]
>>   [ { VOLATILE | STABLE } ];
>>
>> Especially the TEMP is not spelled out and how its presence affects or
>> doesn't ON TRANSACTION END.
>> So may be if you elaborate I understand where you are coming from.
> I think that the TEMP keyword can be removed. If I understand correctly,
> the default scope for a variable is the session; every transaction in a
> session will see the same value. For the transaction level - probably the
> reason for the TEMP keyword - I think [ ON TRANSACTION END { RESET |
> DROP } ] will allow restricting the scope to the transaction level
> without needing the TEMP keyword. When a variable is created in a
> transaction, it is temporary if "ON TRANSACTION END DROP" is used,
> otherwise it will persist after the transaction ends. I guess that this
> is the same as using the TEMP keyword?

I forgot to say that in the last case the DECLARE statement can be used,
so I don't see the need for this kind of "temporary" variable.

Maybe a variable object like the one used in DB2, as defined in this document:
https://www.ibm.com/support/knowledgecenter/en/SSEPEK_11.0.0/sqlref/src/tpc/db2z_sql_createvariable.html
could be enough to cover our needs.


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org





Re: [HACKERS] PostgreSQL 10 parenthesized single-column updates can produce errors

2017-10-31 Thread David G. Johnston
On Tue, Oct 31, 2017 at 3:43 PM, Tom Lane  wrote:

> According to the spec, the elements of a parenthesized
> SET list should be assigned from the fields of a composite RHS.  If
> there's just one element of the SET list, the RHS should be a single-field
> composite value, and this syntax should result in extracting its field.
> This patch doesn't do that; it preserves our previous broken behavior,
> and thus just puts off the day of reckoning that inevitably will come.
>

Definitely moderates my opinion in my concurrent email...though
postponement is not strictly bad given the seeming frequency of the
existing problematic syntax in the wild already.

David J.


Re: [HACKERS] PostgreSQL 10 parenthesized single-column updates can produce errors

2017-10-31 Thread David G. Johnston
On Tue, Oct 31, 2017 at 3:14 PM, Rob McColl  wrote:

>
>> I believe that this is not an intended change or behavior, but is instead
>> an unintentional side effect of 906bfcad7ba7cb3863fe0e2a7810be8e3cd84fbd
>> Improve handling of "UPDATE ... SET (column_list) = row_constructor". (
>> https://github.com/postgres/postgres/commit/906bfcad7ba7cb3
>> 863fe0e2a7810be8e3cd84fbd).
>>
>
At this point the intent of 906bfcad doesn't really matter - and given the
number of complaints in the month since v10 went live I'm tending to lean
toward bringing the pre-10 behavior back. There is no ambiguity involved
here and the breakage of existing applications seems considerably worse
than the technical oddity of allowing (val) to be interpreted as a
row_constructor in this situation.  From a standards perspective we are
strictly more permissive so no new problem there.

On a related note, the 10.0 syntax guide is wrong, it needs to break out
the parenthesized single-column and multi-column variants separately:

Presently: ( column_name [, ...] ) = [ ROW ] ( { expression | DEFAULT } [,
...] )

Basically one cannot specify only a single column_name AND omit ROW

It would have been nice if the syntax for that variant would have been:

( column_name, column_name [, ...] ) = [ ROW ] ( { expression | DEFAULT },
{ expression | DEFAULT } [, ...] )
If the v10 behavior remains the above change should be made as well as
adding:

( column_name ) = ROW ( expression | DEFAULT )

If we revert 10 to the pre-10 behavior the existing syntax will work.
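
To make the distinction concrete (using the update_test table from the
regression suite; a sketch of the v10 rules):

    UPDATE update_test SET c = 'bungle' WHERE c = 'foo';         -- always fine
    UPDATE update_test SET (c) = ROW('bungle') WHERE c = 'foo';  -- accepted in v10
    UPDATE update_test SET (c) = ('bungle') WHERE c = 'foo';     -- rejected by 10.0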

David J.


Re: [HACKERS] Re: PANIC: invalid index offnum: 186 when processing BRIN indexes in VACUUM

2017-10-31 Thread Tomas Vondra
Hi,

On 10/31/2017 08:46 PM, Tom Lane wrote:
> I wrote:
>> maybe
>> we just have some run-of-the-mill bugs to find, like the off-the-end
>> bug I spotted in brin_doupdate.  There's apparently at least one
>> more, but given the error message it must be something like not
>> checking for a page to have turned into a revmap page.  Shouldn't
>> be too hard to find...
> 
> Actually, I think it might be as simple as the attached.
> brin_getinsertbuffer checks for the old page having turned into revmap,
> but the "samepage" path in brin_doupdate does not :-(
> 
> With this applied, Alvaro's version of the test case has survived
> without error for quite a bit longer than its former MTBF.  There
> might still be some issues though in other code paths.
> 

That does fix the crashes for me - I've been unable to reproduce any
even after one hour (it took a couple of minutes to crash before).

Unfortunately, I think we still have a problem ... I've been wondering
if we end up producing correct indexes, so I've done a simple test.

1) create the table as before

2) let the insert + vacuum run for some time, to see if there are
crashes (result: no crashes after one hour, inserting ~92M rows)

3) do a bunch of random updates on the data (while still doing the
concurrent vacuum in another session)

4) run a bunch of simple queries to compare the results, essentially

   -- BRIN index
   SET enable_bitmapscan = on;
   SELECT COUNT(*) FROM brin_test WHERE a = $1;


   -- seq scan
   SET enable_bitmapscan = off;
   SELECT COUNT(*) FROM brin_test WHERE a = $1;

and unfortunately what I get is not particularly pleasant:

test=# set enable_bitmapscan = on;
SET
test=# select count(*) from brin_test where a = 0;
 count
---
  9062
(1 row)

test=# set enable_bitmapscan = off;
SET
test=# select count(*) from brin_test where a = 0;
 count
---
  9175
(1 row)

Attached is a SQL script with commands I used. You'll need to copy the
commands into multiple psql sessions, though, to simulate concurrent
activity).

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


brin-test.sql
Description: application/sql



Re: [HACKERS] PostgreSQL 10 parenthesized single-column updates can produce errors

2017-10-31 Thread Tom Lane
Rob McColl <r...@robmccoll.com> writes:
> Attaching patch... :-/

The reason why hacking your way to a backwards-compatible solution is
a bad idea is that it breaks the SQL standard compliance we're trying to
achieve here.  According to the spec, the elements of a parenthesized
SET list should be assigned from the fields of a composite RHS.  If
there's just one element of the SET list, the RHS should be a single-field
composite value, and this syntax should result in extracting its field.
This patch doesn't do that; it preserves our previous broken behavior,
and thus just puts off the day of reckoning that inevitably will come.

As a concrete example, the spec says this should work:

create table t (f1 int);
update t set (f1) = row(4);

and it does in v10, but (without having tested) your patch will break it.
This is not so exciting for simple row constructors, where you could just
leave off the word ROW; but it is a critical distinction if we're ever to
get to the point of allowing other composite-returning constructs here,
for example composite-returning functions.

regards, tom lane




Re: [HACKERS] proposal: schema variables

2017-10-31 Thread Gilles Darold
Le 31/10/2017 à 22:28, srielau a écrit :
> Pavel,
>
> There is no
> DECLARE TEMP CURSOR
> or
> DECLARE TEMP variable in PLpgSQL
> and
> CREATE TEMP TABLE has a different meaning from what I understand you
> envision for variables.
>
> But maybe I'm mistaken. Your original post did not describe the entire
> syntax:
> CREATE [TEMP] VARIABLE [IF NOT EXISTS] name AS type 
>   [ DEFAULT expression ] [[NOT] NULL]
>   [ ON TRANSACTION END { RESET | DROP } ]
>   [ { VOLATILE | STABLE } ];
>
> Especially the TEMP is not spelled out and how its presence affects or
> doesn't ON TRANSACTION END.
> So may be if you elaborate I understand where you are coming from.

I think that the TEMP keyword can be removed. If I understand correctly,
the default scope for a variable is the session; every transaction in a
session will see the same value. For the transaction level - probably the
reason for the TEMP keyword - I think [ ON TRANSACTION END { RESET |
DROP } ] will allow restricting the scope to the transaction level
without needing the TEMP keyword. When a variable is created in a
transaction, it is temporary if "ON TRANSACTION END DROP" is used,
otherwise it will persist after the transaction ends. I guess that this
is the same as using the TEMP keyword?


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org





Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation)

2017-10-31 Thread Peter Geoghegan
On Thu, Oct 26, 2017 at 4:22 AM, Rushabh Lathia
<rushabh.lat...@gmail.com> wrote:
> Attaching the re based patch according to the v22 parallel-hash patch sets

I took a quick look at this today, and noticed a few issues:

* make_name() is used to name files in sharedtuplestore.c, which is
what is passed to BufFileOpenShared() for parallel hash join. Your
using your own logic for that within the equivalent logtape.c call to
BufFileOpenShared(), presumably because make_name() wants to identify
participants by PID rather than by an ordinal identifier number.

I think that we need some kind of central registry for things that use
shared buffiles. It could be that sharedtuplestore.c is further
generalized to support this, or it could be that they both call
something else that takes care of naming. It's not okay to have this
left to random chance.

You're going to have to ask Thomas about this. You should also use
MAXPGPATH for the char buffer on the stack.

* This logtape.c comment needs to be updated, as it's no longer true:

 * successfully.  In general, workers can take it that the leader will
 * reclaim space in files under their ownership, and so should not
 * reread from tape.

* Robert hated the comment changes in the header of nbtsort.c. You
might want to change it back, because he is likely to be the one that
commits this.

* You should look for similar comments in tuplesort.c (IIRC a couple
of places will need to be revised).

* tuplesort_begin_common() should actively reject a randomAccess
parallel case using elog(ERROR).

* tuplesort.h should note that randomAccess isn't supported, too.

* What's this all about?:

+ /* Accessor for the SharedBufFileSet that is at the end of Sharedsort. */
+ #define GetSharedBufFileSet(shared)\
+   ((BufFileSet *) (&(shared)->tapes[(shared)->nTapes]))

You can't just cast from one type to the other without regard for the
underling size of the shared memory buffer, which is what this looks
like to me. This only fails to crash because you're only abusing the
last member in the tapes array for this purpose, and there happens to
be enough shared memory slop that you get away with it. I'm pretty
sure that ltsUnify() ends up clobbering the last/leader tape, which is
a place where BufFileSet is also used, so this is just wrong. You
should rethink the shmem structure a little bit.

* There is still that FIXME comment within leader_takeover_tapes(). I
believe that you should still have a leader tape (at least in local
memory in the leader), even though you'll never be able to do anything
with it, since randomAccess is no longer supported. You can remove the
FIXME, and just note that you have a leader tape to be consistent with
the serial case, though recognize that it's not useful. Note that even
with randomAccess, we always had the leader tape, so it's not that
different, really.

I suppose it might make sense to make shared->tapes not have a leader
tape. It hardly matters -- perhaps you should leave it there in order
to keep the code simple, as you'll be keeping the leader tape in local
memory, too. (But it still won't fly to continue to clobber it, of
course -- you still need to find a dedicated place for BufFileSet in
shared memory.)

That's all I have right now.
-- 
Peter Geoghegan




Re: [HACKERS] WIP: Restricting pg_rewind to data/wal dirs

2017-10-31 Thread David Steele
On 10/30/17 6:36 AM, Michael Paquier wrote:
> On Mon, Oct 30, 2017 at 10:15 AM, Chris Travers
>>
>> How does rep mgr or other programs using pg_rewind know what to exclude?
> 
> Good question. Answers could come from folks such as David Steele
> (pgBackRest) or Marco (barman) whom I am attaching in CC.

pgBackRest does not use pg_rewind.  However, pgBackRest does use the
same exclusion list for backups as pg_basebackup.

-- 
-David
da...@pgmasters.net




Re: [HACKERS] PATCH: enabling parallel execution for cursors explicitly (experimental)

2017-10-31 Thread Tomas Vondra
Hi,

On 10/20/2017 03:23 PM, Robert Haas wrote:
>
> ...
> 
> The main points I want to make clearly understood is the current
> design relies on (1) functions being labeled correctly and (2) other
> dangerous code paths being unreachable because there's nothing that
> runs between EnterParallelMode and ExitParallelMode which could invoke
> them, except by calling a mislabeled function.  Your patch expands the
> vulnerability surface from "executor code that can be reached without
> calling a mislabeled function" to "any code that can be reached by
> typing an SQL command".  Just rejecting any queries that are
> parallel-unsafe probably closes a good chunk of the holes, but that
> still leaves a lot of code that's never been run in parallel mode
> before potentially now running in parallel mode - e.g. any DDL command
> you happen to type, transaction control commands, code that only runs
> when the server is idle like idle_in_transaction_timeout, cursor
> operations.  A lot of that stuff is probably fine, but it's got to be
> thought through.  Error handling might be a problem, too: what happens
> if a parallel worker is killed while the query is suspended?  I
> suspect that doesn't work very nicely at all.
> 

OK, understood and thanks for explaining what may be the possible
issues. I do appreciate that.

I still think it'd be valuable to support this, though, so I'm going to
spend more time on investigating what needs to be handled.

But maybe there's a simpler option - what if we only allow fetches from
the PARALLEL cursor while the cursor is open? That is, this would work:

BEGIN;
...
DECLARE x PARALLEL CURSOR FOR SELECT * FROM t2 WHERE ...;
FETCH 1000 FROM x;
FETCH 1000 FROM x;
FETCH 1000 FROM x;
CLOSE x;
...
COMMIT;

but adding any other command between the OPEN/CLOSE commands would fail.
That should close all the holes with parallel-unsafe stuff, right?
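
For illustration (a sketch of the proposed restriction, with a made-up error
message), this would then be rejected:

BEGIN;
DECLARE x PARALLEL CURSOR FOR SELECT * FROM t2 WHERE ...;
FETCH 1000 FROM x;
INSERT INTO t1 VALUES (1);  -- ERROR: other commands not allowed while a parallel cursor is open
CLOSE x;
COMMIT;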

Of course, this won't solve the issue with error handling / killing
suspended workers (which didn't occur to me before as a possible issue
at all, so thanks for pointing that out). But that's a significantly
more limited issue to fix than all the parallel-unsafe bits.

Now, I agree this is somewhat more limited than I hoped for, but OTOH it
still solves the issue I initially aimed for (processing large query
results in an efficient way).


regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] PostgreSQL 10 parenthesized single-column updates can produce errors

2017-10-31 Thread Rob McColl
! UPDATE update_test SET (c) = ('bungle') WHERE c = 'foo';
! SELECT * FROM update_test;
! UPDATE update_test SET (c,b,a) = ('bugle', b+11, DEFAULT) WHERE c = 'bungle';
  SELECT * FROM update_test;
  UPDATE update_test SET (c,b) = ('car', a+b), a = a + 1 WHERE a = 10;
  SELECT * FROM update_test;



[HACKERS] Account for cost and selectivity of HAVING quals

2017-10-31 Thread Tom Lane
pathnode->path.startup_cost = target->cost.startup;
  	pathnode->path.total_cost = target->cost.startup +
  		cpu_tuple_cost + target->cost.per_tuple;
+ 	/* Add cost of qual, if any --- but we ignore its selectivity */
  	if (resconstantqual)
  	{
  		QualCost	qual_cost;
*** create_unique_path(PlannerInfo *root, Re
*** 1596,1601 
--- 1597,1603 
  			cost_agg(&agg_path, root,
  	 AGG_HASHED, NULL,
  	 numCols, pathnode->path.rows,
+ 	 NIL,
  	 subpath->startup_cost,
  	 subpath->total_cost,
  	 rel->rows);
*** create_group_path(PlannerInfo *root,
*** 2592,2597 
--- 2594,2600 
  	cost_group(&pathnode->path, root,
  			   list_length(groupClause),
  			   numGroups,
+ 			   qual,
  			   subpath->startup_cost, subpath->total_cost,
  			   subpath->rows);
  
*** create_agg_path(PlannerInfo *root,
*** 2709,2714 
--- 2712,2718 
  	cost_agg(&pathnode->path, root,
  			 aggstrategy, aggcosts,
  			 list_length(groupClause), numGroups,
+ 			 qual,
  			 subpath->startup_cost, subpath->total_cost,
  			 subpath->rows);
  
*** create_groupingsets_path(PlannerInfo *ro
*** 2817,2822 
--- 2821,2827 
  	 agg_costs,
  	 numGroupCols,
  	 rollup->numGroups,
+ 	 having_qual,
  	 subpath->startup_cost,
  	 subpath->total_cost,
  	 subpath->rows);
*** create_groupingsets_path(PlannerInfo *ro
*** 2840,2845 
--- 2845,2851 
  		 agg_costs,
  		 numGroupCols,
  		 rollup->numGroups,
+ 		 having_qual,
  		 0.0, 0.0,
  		 subpath->rows);
  if (!rollup->is_hashed)
*** create_groupingsets_path(PlannerInfo *ro
*** 2863,2868 
--- 2869,2875 
  		 agg_costs,
  		 numGroupCols,
  		 rollup->numGroups,
+ 		 having_qual,
  		 sort_path.startup_cost,
  		 sort_path.total_cost,
  		 sort_path.rows);
*** create_minmaxagg_path(PlannerInfo *root,
*** 2931,2936 
--- 2938,2952 
  	pathnode->path.startup_cost = initplan_cost + target->cost.startup;
  	pathnode->path.total_cost = initplan_cost + target->cost.startup +
  		target->cost.per_tuple + cpu_tuple_cost;
+ 	/* Add cost of qual, if any --- but we ignore its selectivity */
+ 	if (quals)
+ 	{
+ 		QualCost	qual_cost;
+ 
+ 		cost_qual_eval(&qual_cost, quals, root);
+ 		pathnode->path.startup_cost += qual_cost.startup;
+ 		pathnode->path.total_cost += qual_cost.startup + qual_cost.per_tuple;
+ 	}
  
  	return pathnode;
  }
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index 306d923..6c2317d 100644
*** a/src/include/optimizer/cost.h
--- b/src/include/optimizer/cost.h
*** extern void cost_material(Path *path,
*** 116,121 
--- 116,122 
  extern void cost_agg(Path *path, PlannerInfo *root,
  		 AggStrategy aggstrategy, const AggClauseCosts *aggcosts,
  		 int numGroupCols, double numGroups,
+ 		 List *quals,
  		 Cost input_startup_cost, Cost input_total_cost,
  		 double input_tuples);
  extern void cost_windowagg(Path *path, PlannerInfo *root,
*** extern void cost_windowagg(Path *path, P
*** 124,129 
--- 125,131 
  			   double input_tuples);
  extern void cost_group(Path *path, PlannerInfo *root,
  		   int numGroupCols, double numGroups,
+ 		   List *quals,
  		   Cost input_startup_cost, Cost input_total_cost,
  		   double input_tuples);
  extern void initial_cost_nestloop(PlannerInfo *root,



Re: [HACKERS] proposal: schema variables

2017-10-31 Thread srielau
Pavel,

There is no
DECLARE TEMP CURSOR
or
DECLARE TEMP variable in PLpgSQL
and
CREATE TEMP TABLE has a different meaning from what I understand you
envision for variables.

But maybe I'm mistaken. Your original post did not describe the entire
syntax:
CREATE [TEMP] VARIABLE [IF NOT EXISTS] name AS type 
  [ DEFAULT expression ] [[NOT] NULL]
  [ ON TRANSACTION END { RESET | DROP } ]
  [ { VOLATILE | STABLE } ];

In particular, TEMP is not spelled out, nor is how its presence affects (or
doesn't affect) ON TRANSACTION END.
So maybe if you elaborate I'll understand where you are coming from.

 







Re: [HACKERS] proposal: schema variables

2017-10-31 Thread Pavel Stehule
2017-10-31 22:08 GMT+01:00 Serge Rielau :

> Pavel,
>
> I can imagine that a DECLARE command could be introduced as a shortcut for
> CREATE TEMP VARIABLE, but at this moment I would prefer not to open this
> topic. I am afraid of bikeshedding, and I hope CREATE TEMP VARIABLE is enough.
>
> Language is important because language stays.
> You choice of syntax will outlive your code and possibly yourself.
>

Sure. But at this moment I don't see any difference between DECLARE VARIABLE
and CREATE TEMP VARIABLE other than the "TEMP" keyword.

Regards

Pavel


> My 2 cents
> Serge
>


Re: [HACKERS] GSoC 2017 : Patch for predicate locking in Gist index

2017-10-31 Thread Shubham Barai
On 9 October 2017 at 18:57, Alexander Korotkov <a.korot...@postgrespro.ru>
wrote:

> On Thu, Oct 5, 2017 at 9:48 PM, Shubham Barai <shubhambara...@gmail.com>
> wrote:
>
>> On 3 October 2017 at 00:32, Alexander Korotkov <a.korot...@postgrespro.ru
>> > wrote:
>>
>>> On Mon, Oct 2, 2017 at 9:11 PM, Andrew Borodin <amborodi...@gmail.com>
>>> wrote:
>>>
>>>> On Mon, Oct 2, 2017 at 8:00 PM, Alexander Korotkov
>>>> <a.korot...@postgrespro.ru> wrote:
>>>> > What happen if exactly this "continue" fires?
>>>> >
>>>> >> if (GistFollowRight(stack->page))
>>>> >> {
>>>> >> if (!xlocked)
>>>> >> {
>>>> >> LockBuffer(stack->buffer, GIST_UNLOCK);
>>>> >> LockBuffer(stack->buffer, GIST_EXCLUSIVE);
>>>> >> xlocked = true;
>>>> >> /* someone might've completed the split when we unlocked */
>>>> >> if (!GistFollowRight(stack->page))
>>>> >> continue;
>>>> >
>>>> >
>>>> > In this case we might get xlocked == true without calling
>>>> > CheckForSerializableConflictIn().
>>>> Indeed! I've overlooked it. I remember this issue; we were
>>>> considering not fixing splits when in Serializable isolation, but
>>>> dropped the idea.
>>>>
>>>
>>> Yeah, the current insert algorithm assumes that a split must be fixed
>>> before we can correctly traverse the tree downwards.
>>>
>>>
>>>> CheckForSerializableConflictIn() must be after every exclusive lock.
>>>>
>>>
>>> I'm not sure that fixing a split is a case that necessarily requires
>>> calling CheckForSerializableConflictIn().  This lock on the leaf page is not
>>> taken to modify the page.  It's just taken to ensure that nobody else
>>> is fixing this split at the same time.  After fixing the split, it might
>>> turn out that the insert would go to another page.
>>>
>>> > I think it would be rather safe and easy to understand to move
>>>> > CheckForSerializableConflictIn() directly before gistinserttuple().
>>>> The difference is that after taking the lock we have the conditions to
>>>> change the page, while before gistinserttuple() we have the actual intent
>>>> to change the page.
>>>>
>>>> From the point of view of future development the first version is better
>>>> (in case some new call sites appear later), but it has 3 calls while your
>>>> proposal has 2.
>>>> It seems to me that CheckForSerializableConflictIn() before
>>>> gistinserttuple() is better for now.
>>>>
>>>
>>> Agree.
>>>
>>
>> I have updated the location of  CheckForSerializableConflictIn()  and
>> changed the status of the patch to "needs review".
>>
>
> Now, ITSM that predicate locks and conflict checks are placed right for
> now.
> However, it would be good to add a couple of comments to gistdoinsert()
> stating why we call CheckForSerializableConflictIn() in these
> particular places.
>
> I also took a look at the isolation tests.  You made two separate test specs:
> one to verify that serialization failures do fire, and another to check
> there are no false positives.
> I wonder if we could merge these two test specs into one, using a wider
> variety of statements with different keys for both inserts and selects.
>

Please find the updated version of the patch here. I have made the suggested
changes.

Regards,
Shubham


Predicate-locking-in-gist-index_4.patch
Description: Binary data



[HACKERS] Dynamic result sets from procedures

2017-10-31 Thread Peter Eisentraut
-#define CURSOR_OPT_PARALLEL_OK 0x0100  /* parallel mode OK */
+#define CURSOR_OPT_FAST_PLAN   (1 << 8)/* prefer fast-start plan */
+#define CURSOR_OPT_GENERIC_PLAN (1 << 9)   /* force use of generic plan */
+#define CURSOR_OPT_CUSTOM_PLAN (1 << 10)   /* force use of custom plan */
+#define CURSOR_OPT_PARALLEL_OK (1 << 11)   /* parallel mode OK */
 
 typedef struct DeclareCursorStmt
 {
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index a932400058..566546fe38 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -335,6 +335,7 @@ PG_KEYWORD("replica", REPLICA, UNRESERVED_KEYWORD)
 PG_KEYWORD("reset", RESET, UNRESERVED_KEYWORD)
 PG_KEYWORD("restart", RESTART, UNRESERVED_KEYWORD)
 PG_KEYWORD("restrict", RESTRICT, UNRESERVED_KEYWORD)
+PG_KEYWORD("return", RETURN, UNRESERVED_KEYWORD)
 PG_KEYWORD("returning", RETURNING, RESERVED_KEYWORD)
 PG_KEYWORD("returns", RETURNS, UNRESERVED_KEYWORD)
 PG_KEYWORD("revoke", REVOKE, UNRESERVED_KEYWORD)
diff --git a/src/include/utils/portal.h b/src/include/utils/portal.h
index cb6f00081d..08890e5c6a 100644
--- a/src/include/utils/portal.h
+++ b/src/include/utils/portal.h
@@ -130,6 +130,12 @@ typedef struct PortalData
SubTransactionId createSubid;   /* the creating subxact */
SubTransactionId activeSubid;   /* the last subxact with activity */
 
+   /*
+* Command ID where the portal was created.  Used for sorting returnable
+* cursors into creation order.
+*/
+   CommandId   createCid;
+
/* The query or queries the portal will execute */
const char *sourceText; /* text of query (as of 8.4, never NULL) */
const char *commandTag; /* command tag for original query */
@@ -237,5 +243,6 @@ extern PlannedStmt *PortalGetPrimaryStmt(Portal portal);
 extern void PortalCreateHoldStore(Portal portal);
 extern void PortalHashTableDeleteAll(void);
 extern bool ThereAreNoReadyPortals(void);
+extern List *GetReturnableCursors(void);
 
 #endif /* PORTAL_H */
diff --git a/src/test/regress/expected/create_procedure.out b/src/test/regress/expected/create_procedure.out
index eeb129d71f..219118cb16 100644
--- a/src/test/regress/expected/create_procedure.out
+++ b/src/test/regress/expected/create_procedure.out
@@ -79,8 +79,33 @@ ALTER ROUTINE testfunc1a RENAME TO testfunc1;
 ALTER ROUTINE ptest1(text) RENAME TO ptest1a;
 ALTER ROUTINE ptest1a RENAME TO ptest1;
 DROP ROUTINE testfunc1(int);
+-- dynamic result sets
+CREATE TABLE cp_test2 (a int);
+INSERT INTO cp_test2 VALUES (1), (2), (3);
+CREATE TABLE cp_test3 (x text, y text);
+INSERT INTO cp_test3 VALUES ('abc', 'def'), ('foo', 'bar');
+CREATE PROCEDURE pdrstest1()
+LANGUAGE SQL
+AS $$
+DECLARE c1 CURSOR WITH RETURN FOR SELECT * FROM cp_test2;
+DECLARE c2 CURSOR WITH RETURN FOR SELECT * FROM cp_test3;
+$$;
+CALL pdrstest1();
+ a 
+---
+ 1
+ 2
+ 3
+(3 rows)
+
+  x  |  y  
+-+-
+ abc | def
+ foo | bar
+(2 rows)
+
 -- cleanup
 DROP PROCEDURE ptest1;
 DROP PROCEDURE ptest2;
-DROP TABLE cp_test;
+DROP TABLE cp_test, cp_test2, cp_test3;
 DROP USER regress_user1;
diff --git a/src/test/regress/sql/create_procedure.sql b/src/test/regress/sql/create_procedure.sql
index f09ba2ad30..911882151c 100644
--- a/src/test/regress/sql/create_procedure.sql
+++ b/src/test/regress/sql/create_procedure.sql
@@ -69,11 +69,28 @@ CREATE USER regress_user1;
 DROP ROUTINE testfunc1(int);
 
 
+-- dynamic result sets
+
+CREATE TABLE cp_test2 (a int);
+INSERT INTO cp_test2 VALUES (1), (2), (3);
+CREATE TABLE cp_test3 (x text, y text);
+INSERT INTO cp_test3 VALUES ('abc', 'def'), ('foo', 'bar');
+
+CREATE PROCEDURE pdrstest1()
+LANGUAGE SQL
+AS $$
+DECLARE c1 CURSOR WITH RETURN FOR SELECT * FROM cp_test2;
+DECLARE c2 CURSOR WITH RETURN FOR SELECT * FROM cp_test3;
+$$;
+
+CALL pdrstest1();
+
+
 -- cleanup
 
 DROP PROCEDURE ptest1;
 DROP PROCEDURE ptest2;
 
-DROP TABLE cp_test;
+DROP TABLE cp_test, cp_test2, cp_test3;
 
 DROP USER regress_user1;
-- 
2.14.3




Re: [HACKERS] proposal: schema variables

2017-10-31 Thread Serge Rielau
Pavel, "I can imagine that a DECLARE command could be introduced as a
shortcut for CREATE TEMP VARIABLE, but at this moment I would prefer not to
open this topic. I am afraid of bikeshedding, and I hope CREATE TEMP
VARIABLE is enough." Language is important because language stays. Your
choice of syntax will outlive your code and possibly yourself.

My 2 cents Serge

Re: [HACKERS] SQL procedures

2017-10-31 Thread Pavel Stehule
2017-10-31 18:23 GMT+01:00 Peter Eisentraut <
peter.eisentr...@2ndquadrant.com>:

> I've been working on SQL procedures.  (Some might call them "stored
> procedures", but I'm not aware of any procedures that are not stored, so
> that's not a term that I'm using here.)
>
> Everything that follows is intended to align with the SQL standard, at
> least in spirit.
>
> This first patch does a bunch of preparation work.  It adds the
> CREATE/ALTER/DROP PROCEDURE commands and the CALL statement to call a
> procedure.  It also adds ROUTINE syntax which can refer to a function or
> procedure.  I have extended that to include aggregates.  And then there
> is a bunch of leg work, such as psql and pg_dump support.  The
> documentation is a lot of copy-and-paste right now; that can be
> revisited sometime.  The provided procedural languages (an ever more
> confusing term) each needed a small touch-up to handle pg_proc entries
> with prorettype == 0.
>
> Right now, there is no support for returning values from procedures via
> OUT parameters.  That will need some definitional pondering; and see
> also below for a possible alternative.
>
> With this, you can write procedures that are somewhat compatible with
> DB2, MySQL, and to a lesser extent Oracle.
>
> Separately, I will send patches that implement (the beginnings of) two
> separate features on top of this:
>
> - Transaction control in procedure bodies
>
> - Returning multiple result sets
>
> (In various previous discussions on "real stored procedures" or
> something like that, most people seemed to have one of these two
> features in mind.  I think that depends on what other SQL systems one
> has worked with previously.)
>

I'm not sure if disabling RETURN is a good idea. I can imagine that
optionally returning something like an int status could be useful, and
cheaper than raising an exception.

Regards

Pavel


> --
> Peter Eisentraut      http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>


Re: [HACKERS] proposal: schema variables

2017-10-31 Thread Pavel Stehule
Hi

2017-10-30 22:42 GMT+01:00 srielau <se...@rielau.com>:

> Pavel,
>
> I wouldn't put in the DROP option.
> Or at least not in that form of syntax.
>
> By convention CREATE persists DDL and makes object definitions visible
> across sessions.
> DECLARE defines session private objects which cannot collide with other
> sessions.
>
> If you want variables with a short lifetime that get dropped at the end of
> the transaction that by definition would imply a session private object. So
> it ought to be DECLARE'd.
>
> As far as I can see PG has been following this practice so far.
>

I think there is a little bit of overlap between the DECLARE and CREATE TEMP
VARIABLE commands. With a DECLARE command, you usually have no control over
when the variable will be destroyed. For CREATE TEMP there is DROP IF
EXISTS, but it should not have to be used.

It should be very similar to our current temporary tables, which are created
in a session-related temp schema.

I can imagine that a DECLARE command could be introduced as a shortcut for
CREATE TEMP VARIABLE, but at this moment I would prefer not to open this
topic. I am afraid of bikeshedding, and I hope CREATE TEMP VARIABLE is enough.

Regards

Pavel


> Cheers
> Serge Rielau
> Salesforce.com
>
>
>


Re: [HACKERS] SQL procedures

2017-10-31 Thread Pavel Stehule
2017-10-31 18:23 GMT+01:00 Peter Eisentraut <
peter.eisentr...@2ndquadrant.com>:

> I've been working on SQL procedures.  (Some might call them "stored
> procedures", but I'm not aware of any procedures that are not stored, so
> that's not a term that I'm using here.)
>
> Everything that follows is intended to align with the SQL standard, at
> least in spirit.
>
> This first patch does a bunch of preparation work.  It adds the
> CREATE/ALTER/DROP PROCEDURE commands and the CALL statement to call a
> procedure.  It also adds ROUTINE syntax which can refer to a function or
> procedure.  I have extended that to include aggregates.  And then there
> is a bunch of leg work, such as psql and pg_dump support.  The
> documentation is a lot of copy-and-paste right now; that can be
> revisited sometime.  The provided procedural languages (an ever more
> confusing term) each needed a small touch-up to handle pg_proc entries
> with prorettype == 0.
>
> Right now, there is no support for returning values from procedures via
> OUT parameters.  That will need some definitional pondering; and see
> also below for a possible alternative.
>
> With this, you can write procedures that are somewhat compatible with
> DB2, MySQL, and to a lesser extent Oracle.
>
> Separately, I will send patches that implement (the beginnings of) two
> separate features on top of this:
>
> - Transaction control in procedure bodies
>
> - Returning multiple result sets
>
> (In various previous discussions on "real stored procedures" or
> something like that, most people seemed to have one of these two
> features in mind.  I think that depends on what other SQL systems one
> has worked with previously.)
>

Great. I hope I can help with testing.

Regards

Pavel

>
> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>


Re: [HACKERS] Re: PANIC: invalid index offnum: 186 when processing BRIN indexes in VACUUM

2017-10-31 Thread Tom Lane
I wrote:
> maybe
> we just have some run-of-the-mill bugs to find, like the off-the-end
> bug I spotted in brin_doupdate.  There's apparently at least one
> more, but given the error message it must be something like not
> checking for a page to have turned into a revmap page.  Shouldn't
> be too hard to find...

Actually, I think it might be as simple as the attached.
brin_getinsertbuffer checks for the old page having turned into revmap,
but the "samepage" path in brin_doupdate does not :-(

With this applied, Alvaro's version of the test case has survived
without error for quite a bit longer than its former MTBF.  There
might still be some issues though in other code paths.

regards, tom lane

diff --git a/src/backend/access/brin/brin_pageops.c b/src/backend/access/brin/brin_pageops.c
index 80f803e..b0f86f3 100644
*** a/src/backend/access/brin/brin_pageops.c
--- b/src/backend/access/brin/brin_pageops.c
*** brin_doupdate(Relation idxrel, BlockNumb
*** 113,121 
  
  	/*
  	 * Check that the old tuple wasn't updated concurrently: it might have
! 	 * moved someplace else entirely ...
  	 */
! 	if (!ItemIdIsNormal(oldlp))
  	{
  		LockBuffer(oldbuf, BUFFER_LOCK_UNLOCK);
  
--- 113,127 
  
  	/*
  	 * Check that the old tuple wasn't updated concurrently: it might have
! 	 * moved someplace else entirely, and for that matter the whole page
! 	 * might've become a revmap page.  Note that in the first two cases
! 	 * checked here, the "oldlp" we just calculated is garbage; but
! 	 * PageGetItemId() is simple enough that it was safe to do that
! 	 * calculation anyway.
  	 */
! 	if (!BRIN_IS_REGULAR_PAGE(oldpage) ||
! 		oldoff > PageGetMaxOffsetNumber(oldpage) ||
! 		!ItemIdIsNormal(oldlp))
  	{
  		LockBuffer(oldbuf, BUFFER_LOCK_UNLOCK);
  



[HACKERS] Transaction control in procedures

2017-10-31 Thread Peter Eisentraut
+
+   Py_RETURN_NONE;
+}
diff --git a/src/pl/plpython/sql/plpython_transaction.sql b/src/pl/plpython/sql/plpython_transaction.sql
new file mode 100644
index 00..42f191b008
--- /dev/null
+++ b/src/pl/plpython/sql/plpython_transaction.sql
@@ -0,0 +1,36 @@
+CREATE TABLE test1 (a int, b text);
+
+
+CREATE PROCEDURE transaction_test1()
+LANGUAGE plpythonu
+AS $$
+for i in range(0, 10):
+    plpy.execute("INSERT INTO test1 (a) VALUES (%d)" % i)
+    if i % 2 == 0:
+        plpy.commit()
+    else:
+        plpy.rollback()
+$$;
+
+CALL transaction_test1();
+
+SELECT * FROM test1;
+
+
+TRUNCATE test1;
+
+DO
+LANGUAGE plpythonu
+$$
+for i in range(0, 10):
+    plpy.execute("INSERT INTO test1 (a) VALUES (%d)" % i)
+    if i % 2 == 0:
+        plpy.commit()
+    else:
+        plpy.rollback()
+$$;
+
+SELECT * FROM test1;
+
+
+DROP TABLE test1;

base-commit: ee4673ac071f8352c41cc673299b7ec695f079ff
prerequisite-patch-id: 7b37b5c8905dcab64b9c6b8f98caf81049a569ec
-- 
2.14.3




Re: [HACKERS] Re: PANIC: invalid index offnum: 186 when processing BRIN indexes in VACUUM

2017-10-31 Thread Tom Lane
Alvaro Herrera <alvhe...@alvh.no-ip.org> writes:
> Tom Lane wrote:
>> I really don't understand how any of this "let's release the buffer
>> lock and then take it back later" logic is supposed to work reliably.

> So summarize_range first inserts the placeholder tuple, which is there
> purposefully for other processes to update concurrently; then, without
> blocking any other process, it scans the page range and updates the
> placeholder tuple (this could take a long time, so it'd be a bad idea
> to hold a buffer lock for that long).

Yeah, agreed, and your upthread point about avoiding deadlock when we
need to take two buffer locks at the same time is also something that
it's hard to see any other way around.  Maybe we just have to find a
way to make the existing structure work.  The sticking point is that
not only might the tuple we expected have been deleted, but someone
could have put an unrelated tuple in its place.  Are you confident
that you can detect that and recover from it?  If you're sure that
brin_tuples_equal() is trustworthy for this purpose, then maybe
we just have some run-of-the-mill bugs to find, like the off-the-end
bug I spotted in brin_doupdate.  There's apparently at least one
more, but given the error message it must be something like not
checking for a page to have turned into a revmap page.  Shouldn't
be too hard to find...

    regards, tom lane




Re: [HACKERS] SQL procedures

2017-10-31 Thread Tom Lane
Peter Eisentraut <peter.eisentr...@2ndquadrant.com> writes:
> I've been working on SQL procedures.

No comment yet on the big picture here, but ...

> The provided procedural languages (an ever more
> confusing term) each needed a small touch-up to handle pg_proc entries
> with prorettype == 0.

Putting 0 in prorettype seems like a pretty bad idea.  Why not use VOIDOID
for the prorettype value?  Or if there is some reason why "void" isn't the
right pseudotype, maybe you should invent a new one, analogous to the
"trigger" and "event_trigger" pseudotypes.

        regards, tom lane




Re: [HACKERS] Re: PANIC: invalid index offnum: 186 when processing BRIN indexes in VACUUM

2017-10-31 Thread Alvaro Herrera
Tom Lane wrote:

> I really don't understand how any of this "let's release the buffer
> lock and then take it back later" logic is supposed to work reliably.

So summarize_range first inserts the placeholder tuple, which is there
purposefully for other processes to update concurrently; then, without
blocking any other process, it scans the page range and updates the
placeholder tuple (this could take a long time, so it'd be a bad idea
to hold a buffer lock for that long).

I think what we should do is rethink the locking considerations in
brin_doupdate vs. brinGetTupleForHeapBlock, and how they are used in
summarize_range and brininsert.  In summarize_range, instead of hoping
that in some cases we will not need to re-obtain the placeholder tuple,
just do that in all cases, keeping the buffer locked until the tuple is
updated.

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Remove inbound links to sql-createuser

2017-10-31 Thread Stephen Frost
David,

* Stephen Frost (sfr...@snowman.net) wrote:
> * David G. Johnston (david.g.johns...@gmail.com) wrote:
> > Since CREATE USER is officially an alias for CREATE ROLE other parts of the
> > documentation should point to CREATE ROLE, not CREATE USER.  Most do but I
> > noticed when looking at CREATE DATABASE that it did not.  Further searching
> > turned up the usage in client-auth.sgml.  That one is questionable since we
> > are indeed talking about LOGIN roles there but we are already pointing to
> > sql-alterrole instead of sql-alteruser and so changing the create variation
> > to point to sql-createrole seems warranted.
> 
> +1.
> 
> Barring objections, I'll commit this in a bit.

Pushed to master, with a small bit of word-smithing in the CREATE ROLE
docs also.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Fix dumping pre-10 DBs by pg_dump10 if table "name" exists

2017-10-31 Thread Vitaly Burovoy
On 10/31/17, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Yeah, there are quite a few unqualified casts in pg_dump, but AFAICS
> all the rest are OK because the search_path is just pg_catalog.
>
> But I did find psql's describe.c making a similar mistake :-(.
> Pushed that along with your fix.
>
>   regards, tom lane
>

Oops. I missed it in "describe.c" because I grepped for the exact "::name" string.

Thank you very much!

--
Best regards,
Vitaly Burovoy




Re: [HACKERS] MERGE SQL Statement for PG11

2017-10-31 Thread Peter Geoghegan
On Tue, Oct 31, 2017 at 2:25 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
> If there are challenges ahead, its reasonable to ask for test cases
> for that now especially if you think you know what they already are.
> Imagine we go forwards 2 months - if you dislike my patch when it
> exists you will submit a test case showing the fault. Why not save us
> all the trouble and describe that now? Test Driven Development.

I already have, on several occasions now. But if you're absolutely
insistent on my constructing the test case in terms of a real SQL
statement, then that's what I'll do.

Consider this MERGE statement, from your mock documentation:

MERGE INTO wines w
USING wine_stock_changes s
ON s.winename = w.winename
WHEN NOT MATCHED AND s.stock_delta > 0 THEN
  INSERT VALUES(s.winename, s.stock_delta)
WHEN MATCHED AND w.stock + s.stock_delta > 0 THEN
  UPDATE SET stock = w.stock + s.stock_delta
ELSE
  DELETE;

Suppose we remove the WHEN NOT MATCHED case, leaving us with:

MERGE INTO wines w
USING wine_stock_changes s
ON s.winename = w.winename
WHEN MATCHED AND w.stock + s.stock_delta > 0 THEN
  UPDATE SET stock = w.stock + s.stock_delta
ELSE
  DELETE;

We now have a MERGE that will not INSERT, but will continue to UPDATE
and DELETE. (It's implied that your syntax cannot do this at all,
because you propose to use the ON CONFLICT infrastructure, but I think we
have to imagine a world in which that restriction either never existed
or has subsequently been lifted.)

The problem here is: Iff the first statement uses ON CONFLICT
infrastructure, doesn't the absence of WHEN NOT MATCHED imply
different semantics for the remaining updates and deletes in the
second version of the query? You've removed what seems like a neat
adjunct to the MERGE, but it actually changes everything else too when
using READ COMMITTED. Isn't that pretty surprising? If you're not
clear on what I mean, see my previous remarks on EPQ, live lock, and
what a CONFLICT could be in READ COMMITTED mode. Concurrent activity
at READ COMMITTED mode can be expected to significantly alter the
outcome here.

Why not just always use the behavior that the second query requires,
which is very much like an UPDATE FROM with an outer join that can
sometimes do deletes (and inserts)? We should always use an MVCC
snapshot, and never play ON CONFLICT style games with visibility/dirty
snapshots.

> It's difficult to discuss anything with someone that refuses to
> believe that there are acceptable ways around things. I believe there
> are.

Isn't that blind faith? Again, it seems like you haven't really tried
to convince me.

> If you can calm down the rhetoric we can work together, but if you
> continue to grandstand it makes it more difficult.

I'm trying to break the deadlock, by getting you to actually consider
what I'm saying. I don't enjoy confrontation. Currently, it seems like
you're just ignoring my objections, which you actually could fairly
easily work through. That is rather frustrating.

> You've said its possible another way. Show that assertion is actually
> true. We're all listening, me especially, for the technical details.

My proposal, if you want to call it that, has the merit of actually
being how MERGE works in every other system. Both Robert and Stephen
seem to be almost sold on what I suggest, so I think that I've
probably already explained my position quite well.

-- 
Peter Geoghegan




Re: [HACKERS] Re: Anyone have experience benchmarking very high effective_io_concurrency on NVME's?

2017-10-31 Thread Tomas Vondra
Hi,

On 10/31/2017 04:48 PM, Greg Stark wrote:
> On 31 October 2017 at 07:05, Chris Travers <chris.trav...@adjust.com> wrote:
>> Hi;
>>
>> After Andres's excellent talk at PGConf we tried benchmarking
>> effective_io_concurrency on some of our servers and found that those
which
>> have a number of NVME storage volumes could not fill the I/O queue
even at
>> the maximum setting (1000).
>
> And was the system still i/o bound? If the cpu was 100% busy then
> perhaps Postgres just can't keep up with the I/O system. It would
> depend on workload though, if you start many very large sequential
> scans you may be able to push the i/o system harder.
>
> Keep in mind effective_io_concurrency only really affects bitmap
> index scans (and to a small degree index scans). It works by issuing
> posix_fadvise() calls for upcoming buffers one by one. That gets
> multiple spindles active but it's not really going to scale to many
> thousands of prefetches (and effective_io_concurrency of 1000
> actually means 7485 prefetches). At some point those i/o are going
> to start completing before Postgres even has a chance to start
> processing the data.
>
Yeah, initiating the prefetches is not expensive, but it's not free
either. So there's a trade-off between time spent on prefetching and
processing the data.

I believe this can actually be illustrated using Amdahl's law - the I/O
is the parallel part, and processing the data is the serial part. And no
matter what you do, the device only has so much bandwidth, which defines
the maximum possible speedup (compared to the "no prefetch" case).
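
To spell that out with illustrative numbers (mine, not measurements): if a
fraction p of the time goes to prefetchable I/O and prefetching speeds that
part up by a factor s, then

    speedup = 1 / ((1 - p) + p / s)

so even with p = 0.8 the overall speedup can never exceed 1 / (1 - 0.8) = 5x,
no matter how deep the prefetch queue gets.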

Furthermore, the device does not wait for all the I/O requests to be
submitted - it won't wait for 1000 requests and then go "OMG! There's a
lot of work to do!" It starts processing the requests as they arrive,
and some of them will complete before you're done with submitting the
rest, so you'll never see all the requests in the queue at once.

And of course, iostat and other tools only give you "average queue
length", which is mostly determined by the average throughput.

In my experience (on all types of storage, including SSDs and NVMe), the
performance quickly and significantly improves once you start increasing
the value (say, to 8 or 16, maybe 64). And then the gains become much
more modest - not because the device could not handle more, but because
the prefetch/processing ratio has reached its optimal value.

But all this is actually per-process; if you can run multiple backends
(particularly when doing bitmap index scans), I'm sure you'll see the
queues filling up more.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Statement-level rollback

2017-10-31 Thread MauMau
From: Simon Riggs
On 14 August 2017 at 23:58, Peter Eisentraut
<peter.eisentr...@2ndquadrant.com> wrote:
> On 2/28/17 02:39, Tsunakawa, Takayuki wrote:
>> The code for stored functions is not written yet, but I'd like your
feedback for the specification and design based on the current patch.
I'll add this patch to CommitFest 2017-3.
>
> This patch needs to be rebased for the upcoming commit fest.

I'm willing to review this if the patch is going to be actively worked
on.


I'm very sorry I couldn't reply to your kind offer.  I rebased the
patch and will add it to CF 2017/11.  I hope I will complete the patch
in this CF.

Regards
Takayuki Tsunakawa




stmt_rollback_v2.patch
Description: Binary data



Re: [HACKERS] SQL procedures

2017-10-31 Thread Simon Riggs
On 31 October 2017 at 18:23, Peter Eisentraut
<peter.eisentr...@2ndquadrant.com> wrote:

> I've been working on SQL procedures.  (Some might call them "stored
> procedures", but I'm not aware of any procedures that are not stored, so
> that's not a term that I'm using here.)

I guess that the DO command might have a variant to allow you to
execute a procedure that isn't stored?

Not suggesting you implement that, just thinking about why/when the
"stored" word would be appropriate.

-- 
Simon Riggshttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Fix dumping pre-10 DBs by pg_dump10 if table "name" exists

2017-10-31 Thread Tom Lane
Vitaly Burovoy <vitaly.buro...@gmail.com> writes:
> I left another "NULL::name AS rolname" at
> src/bin/pg_dump/pg_dump.c:2978 because I can't check it (remoteVersion <
> 9), and it is under a strict "selectSourceSchema(fout,
> "pg_catalog");" schema setting.

Yeah, there are quite a few unqualified casts in pg_dump, but AFAICS
all the rest are OK because the search_path is just pg_catalog.

But I did find psql's describe.c making a similar mistake :-(.
Pushed that along with your fix.

    regards, tom lane




Re: [HACKERS] Re: PANIC: invalid index offnum: 186 when processing BRIN indexes in VACUUM

2017-10-31 Thread Alvaro Herrera
Tom Lane wrote:
> So in a few more runs this morning using Alvaro's simplified test case,
> I have seen the following behaviors not previously reported:

> 1. Crashes in PageIndexTupleOverwrite, which has the same "invalid index
> offnum: %u" error report as PageIndexTupleDeleteNoCompact.  I note the
> same message appears in plain PageIndexTupleDelete as well.
> I think we've been too hasty to assume all instances of this came out of
> PageIndexTupleDeleteNoCompact.

Ah, I wasn't paying close attention to the originator routine of the
message, but you're right, I see this one too.

> 2. Crashes in the data-insertion process, not only the process running
> summarize_range:

Yeah, I saw these.  I was expecting it, since the two routines
(brininsert and summarize_range) pretty much share the insertion
protocol.

> I really don't understand how any of this "let's release the buffer
> lock and then take it back later" logic is supposed to work reliably.

Yeah, evidently that was way too optimistic and I'll need to figure out
a better mechanism to handle this.

The intention was to avoid deadlocks while locking the target page for
the insertion: by having both pages start unlocked we can simply lock
them in block number order.  If we keep the page containing the tuple
locked, I don't see how to reliably avoid a deadlock while acquiring a
buffer to insert the new tuple.
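
(For readers following along, the ordered-locking idea is roughly this: a
sketch using the generic bufmgr calls, not the actual BRIN code, and it
assumes buf1 != buf2 and that neither buffer is locked yet:

	if (BufferGetBlockNumber(buf1) < BufferGetBlockNumber(buf2))
	{
		LockBuffer(buf1, BUFFER_LOCK_EXCLUSIVE);
		LockBuffer(buf2, BUFFER_LOCK_EXCLUSIVE);
	}
	else
	{
		LockBuffer(buf2, BUFFER_LOCK_EXCLUSIVE);
		LockBuffer(buf1, BUFFER_LOCK_EXCLUSIVE);
	}

Since every backend acquires both pages in the same order, they cannot
deadlock against each other.)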

> BTW, while I'm bitching, it seems fairly insane from a concurrency
> standpoint that brin_getinsertbuffer is calling RecordPageWithFreeSpace
> while holding at least one and possibly two buffer locks.  Shouldn't
> that be done someplace else?

Hmm.  I spent a lot of effort (commit ccc4c074994d) to avoid leaving
pages uninitialized / unrecorded in FSM.  I left this on purpose on the
rationale that trying to fix it would make the callsites more convoluted
(the retry logic doesn't help).  But as I recall this was supposed to be
done only in the rare case where the buffer could not be returned to
caller ... but that's not what the current code does, so there is
something wrong there.

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Remove secondary checkpoint

2017-10-31 Thread Michael Paquier
On Tue, Oct 31, 2017 at 2:00 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
> On 30 October 2017 at 18:58, Simon Riggs <si...@2ndquadrant.com> wrote:
>> On 30 October 2017 at 15:22, Simon Riggs <si...@2ndquadrant.com> wrote:
>>
>>>> You forgot to update this formula in xlog.c:
>>>> distance = (2.0 + CheckPointCompletionTarget) * 
>>>> CheckPointDistanceEstimate;
>>>> /* add 10% for good measure. */
>>>> distance *= 1.10;
>>>> You can guess easily where the mistake is.
>>>
>>> Doh - fixed that before posting, so I must have sent the wrong patch.
>>>
>>> It's the 10%, right? ;-)
>>
>> So, there are 2 locations that mention 2.0 in xlog.c
>>
>> I had already fixed one, which is why I remembered editing it.
>>
>> The other one you mention has a detailed comment above it explaining
>> why it should be 2.0 rather than 1.0, so I left it.
>>
>> Let me know if you still think it should be removed? I can see the
>> argument both ways.
>> Or maybe we need another patch to account for manual checkpoints.
>
> Revised patch to implement this

Here is the comment and the formula:
 * The reason this calculation is done from the prior checkpoint, not the
 * one that just finished, is that this behaves better if some checkpoint
 * cycles are abnormally short, like if you perform a manual checkpoint
 * right after a timed one. The manual checkpoint will make almost a full
 * cycle's worth of WAL segments available for recycling, because the
 * segments from the prior's prior, fully-sized checkpoint cycle are no
 * longer needed. However, the next checkpoint will make only few segments
 * available for recycling, the ones generated between the timed
 * checkpoint and the manual one right after that. If at the manual
 * checkpoint we only retained enough segments to get us to the next timed
 * one, and removed the rest, then at the next checkpoint we would not
 * have enough segments around for recycling, to get us to the checkpoint
 * after that. Basing the calculations on the distance from the prior redo
 * pointer largely fixes that problem.
 */
distance = (2.0 + CheckPointCompletionTarget) * CheckPointDistanceEstimate;

While the comment notes that a manual checkpoint happening right after a
timed one will make a full range of WAL segments available for recycling,
it is no longer true that the segments of the prior's prior checkpoint
are needed, because with your patch the segments of the prior checkpoint
are getting recycled. So it seems to me that, based on that, the formula
ought to use 1.0 instead of 2.0...
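
To put illustrative numbers on it (my values, not from the thread): with
checkpoint_completion_target = 0.5 and CheckPointDistanceEstimate = 1GB, the
current formula retains

    distance = (2.0 + 0.5) * 1GB = 2.5GB

of WAL, while 1.0 would give (1.0 + 0.5) * 1GB = 1.5GB, i.e. roughly one
full cycle plus the in-progress one.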
-- 
Michael




Re: [HACKERS] Re: PANIC: invalid index offnum: 186 when processing BRIN indexes in VACUUM

2017-10-31 Thread Tom Lane
BTW, while I'm bitching, it seems fairly insane from a concurrency
standpoint that brin_getinsertbuffer is calling RecordPageWithFreeSpace
while holding at least one and possibly two buffer locks.  Shouldn't
that be done someplace else?

regards, tom lane




Re: [HACKERS] Fix dumping pre-10 DBs by pg_dump10 if table "name" exists

2017-10-31 Thread Vitaly Burovoy
On 10/31/17, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Vitaly Burovoy <vitaly.buro...@gmail.com> writes:
>> Recently my colleagues found a bug.
>
>> -  "SELECT 'bigint'::name AS 
>> sequence_type, "
>> +  "SELECT 
>> 'bigint'::pg_catalog.name AS sequence_type,
>
> Good catch, but I think we could simplify this by just omitting the cast
> altogether:
>
> - "SELECT 'bigint'::name AS sequence_type, "
> + "SELECT 'bigint' AS sequence_type, "
>
> pg_dump doesn't particularly care whether the column comes back marked
> as 'name' or 'text' or 'unknown'.
>
>   regards, tom lane

OK, just for convenience I'm attaching your version of the fix.
I left another "NULL::name AS rolname" at
src/bin/pg_dump/pg_dump.c:2978 because I can't check it (remoteVersion <
9), and it is under a strict "selectSourceSchema(fout,
"pg_catalog");" schema setting.

--
Best regards,
Vitaly Burovoy


0001-Fix-dumping-schema-if-a-table-named-name-exists.ver2.patch
Description: Binary data



[HACKERS] Re: Anyone have experience benchmarking very high effective_io_concurrency on NVME's?

2017-10-31 Thread Greg Stark
On 31 October 2017 at 07:05, Chris Travers <chris.trav...@adjust.com> wrote:
> Hi;
>
> After Andres's excellent talk at PGConf we tried benchmarking
> effective_io_concurrency on some of our servers and found that those which
> have a number of NVME storage volumes could not fill the I/O queue even at
> the maximum setting (1000).

And was the system still i/o bound? If the cpu was 100% busy then
perhaps Postgres just can't keep up with the I/O system. It would
depend on workload though, if you start many very large sequential
scans you may be able to push the i/o system harder.

Keep in mind effective_io_concurrency only really affects bitmap index
scans (and to a small degree index scans). It works by issuing
posix_fadvise() calls for upcoming buffers one by one. That gets
multiple spindles active but it's not really going to scale to many
thousands of prefetches (and effective_io_concurrency of 1000 actually
means 7485 prefetches). At some point those i/o are going to start
completing before Postgres even has a chance to start processing the
data.
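
At the OS level the hint boils down to something like the following
simplified sketch (I believe Postgres actually routes this through
PrefetchBuffer() and FilePrefetch() rather than calling it directly):

	#include <fcntl.h>

	static void
	prefetch_block(int fd, long blocknum)
	{
		/* ask the kernel to read this 8kB block in the background */
		(void) posix_fadvise(fd, (off_t) blocknum * 8192, 8192,
							 POSIX_FADV_WILLNEED);
	}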

-- 
greg




[HACKERS] postgres_fdw: Add support for INSERT OVERRIDING clause

2017-10-31 Thread Peter Eisentraut
It has been pointed out to me that the command deparsing in postgres_fdw
does not support the INSERT OVERRIDING clause that was added in PG10.
Here is a patch that seems to fix that.  I don't know much about this,
whether anything else needs to be added or whether there should be
tests.  Perhaps someone more familiar with postgres_fdw details can
check it.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From 097ecf61a9c2740b02a21be19a92aed92388346a Mon Sep 17 00:00:00 2001
From: Peter Eisentraut <pete...@gmx.net>
Date: Tue, 31 Oct 2017 11:44:14 -0400
Subject: [PATCH] postgres_fdw: Add support for INSERT OVERRIDING clause

---
 contrib/postgres_fdw/deparse.c | 16 +++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/contrib/postgres_fdw/deparse.c b/contrib/postgres_fdw/deparse.c
index 2af8364010..8eb227605f 100644
--- a/contrib/postgres_fdw/deparse.c
+++ b/contrib/postgres_fdw/deparse.c
@@ -1573,7 +1573,21 @@ deparseInsertSql(StringInfo buf, PlannerInfo *root,
deparseColumnRef(buf, rtindex, attnum, root, false);
}
 
-   appendStringInfoString(buf, ") VALUES (");
+   appendStringInfoString(buf, ") ");
+
+   switch (root->parse->override)
+   {
+   case OVERRIDING_USER_VALUE:
+   appendStringInfoString(buf, "OVERRIDING USER VALUE ");
+   break;
+   case OVERRIDING_SYSTEM_VALUE:
+   appendStringInfoString(buf, "OVERRIDING SYSTEM VALUE ");
+   break;
+   default:
+   break;
+   }
+
+   appendStringInfoString(buf, "VALUES (");
 
pindex = 1;
    first = true;
-- 
2.14.3
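
For context, the kind of statement the patch affects would be something like
this (ft_items is a made-up foreign table whose remote counterpart has an
identity column); with the patch, the OVERRIDING SYSTEM VALUE clause is now
forwarded to the remote server instead of being silently dropped:

INSERT INTO ft_items (id, note)
  OVERRIDING SYSTEM VALUE
  VALUES (42, 'migrated row');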




[HACKERS] Consistently catch errors from Python _New() functions

2017-10-31 Thread Peter Eisentraut
  Form_pg_attribute attr = TupleDescAttr(ob->tupdesc, i);
@@ -171,6 +178,8 @@ PLy_result_coltypes(PyObject *self, PyObject *unused)
}
 
list = PyList_New(ob->tupdesc->natts);
+   if (!list)
+   return NULL;
for (i = 0; i < ob->tupdesc->natts; i++)
{
Form_pg_attribute attr = TupleDescAttr(ob->tupdesc, i);
@@ -195,6 +204,8 @@ PLy_result_coltypmods(PyObject *self, PyObject *unused)
}
 
list = PyList_New(ob->tupdesc->natts);
+   if (!list)
+   return NULL;
for (i = 0; i < ob->tupdesc->natts; i++)
{
Form_pg_attribute attr = TupleDescAttr(ob->tupdesc, i);
diff --git a/src/pl/plpython/plpy_spi.c b/src/pl/plpython/plpy_spi.c
index 955769c5e3..cad1324205 100644
--- a/src/pl/plpython/plpy_spi.c
+++ b/src/pl/plpython/plpy_spi.c
@@ -389,6 +389,8 @@ PLy_spi_execute_fetch_result(SPITupleTable *tuptable, uint64 rows, int status)
volatile MemoryContext oldcontext;
 
result = (PLyResultObject *) PLy_result_new();
+   if (!result)
+   return NULL;
Py_DECREF(result->status);
result->status = PyInt_FromLong(status);
 
@@ -435,15 +437,22 @@ PLy_spi_execute_fetch_result(SPITupleTable *tuptable, uint64 rows, int status)
 
 			Py_DECREF(result->rows);
 			result->rows = PyList_New(rows);
-
-			PLy_input_tuple_funcs(&ininfo, tuptable->tupdesc);
-			for (i = 0; i < rows; i++)
+			if (!result->rows)
 			{
-				PyObject   *row = PLyDict_FromTuple(&ininfo,
-													tuptable->vals[i],
-													tuptable->tupdesc);
-
-				PyList_SetItem(result->rows, i, row);
+				Py_DECREF(result);
+				result = NULL;
+			}
+			else
+			{
+				PLy_input_tuple_funcs(&ininfo, tuptable->tupdesc);
+				for (i = 0; i < rows; i++)
+				{
+					PyObject   *row = PLyDict_FromTuple(&ininfo,
+														tuptable->vals[i],
+														tuptable->tupdesc);
+
+					PyList_SetItem(result->rows, i, row);
+				}
 			}
 		}
 
diff --git a/src/pl/plpython/plpy_typeio.c b/src/pl/plpython/plpy_typeio.c
index e4af8cc9ef..569eb5862e 100644
--- a/src/pl/plpython/plpy_typeio.c
+++ b/src/pl/plpython/plpy_typeio.c
@@ -289,7 +289,7 @@ PLyDict_FromTuple(PLyTypeInfo *info, HeapTuple tuple, TupleDesc desc)
 
dict = PyDict_New();
if (dict == NULL)
-   PLy_elog(ERROR, "could not create new dictionary");
+       return NULL;
 
PG_TRY();
{
@@ -675,6 +675,8 @@ PLyList_FromArray_recurse(PLyDatumToOb *elm, int *dims, int ndim, int dim,
PyObject   *list;
 
list = PyList_New(dims[dim]);
+   if (!list)
+   return NULL;
 
if (dim < ndim - 1)
{
-- 
2.14.3




Re: [HACKERS] [PATCH] Generic type subscripting

2017-10-31 Thread Arthur Zakirov
On Sun, Oct 29, 2017 at 10:56:19PM +0100, Dmitry Dolgov wrote:
> 
> So, here is the new version of patch that contains modifications we've
> discussed, namely:
> 
> * store oids of `parse`, `fetch` and `assign` functions
> 
> * introduce dependencies from a data type
> 
> * as a side effect of previous two I also eliminated some unnecessary
> arguments
>   in `parse` function.

Thank you for new version of the patch.

There are some notes.

Documentation
-

The documentation compiles, but there are warnings about end-tags. It is now
necessary to use fully named end-tags:

=# make -C doc/src/sgml check
/usr/sbin/onsgmls:json.sgml:574:20:W: empty end-tag
...

The documentation is out of date:
- catalogs.sgml needs information about the additional pg_type fields
- create_type.sgml needs information about the subscripting_parse,
subscripting_assign and subscripting_fetch options
- xsubscripting.sgml is out of date

Code


I think it is necessary to check the OIDs of subscripting_parse,
subscripting_assign and subscripting_fetch, maybe within TypeCreate().

Otherwise the following cases are possible:

=# CREATE TYPE custom (
   internallength = 8,
   input = custom_in,
   output = custom_out,
   subscripting_parse = custom_subscripting_parse);
=# CREATE TYPE custom (
   internallength = 8,
   input = custom_in,
   output = custom_out,
   subscripting_fetch = custom_subscripting_fetch);

Are all subscripting_* fields mandatory? If so, then if the user provides at
least one of them, all of them should be provided.

Should all types support assigning via subscript? If not, then the
subscripting_assign parameter is optional.

> +Datum
> +jsonb_subscript_parse(PG_FUNCTION_ARGS)
> +{
> + boolisAssignment = PG_GETARG_BOOL(0);

and

> +Datum
> +custom_subscripting_parse(PG_FUNCTION_ARGS)
> +{
> + boolisAssignment = PG_GETARG_BOOL(0);

Here isAssignment is an unused variable, so it could be removed.

> +
> + scratch->d.sbsref.eval_finfo = eval_finfo;
> + scratch->d.sbsref.nested_finfo = nested_finfo;
> +

As I mentioned earlier, we need to assign eval_finfo and nested_finfo only for
the EEOP_SBSREF_OLD, EEOP_SBSREF_ASSIGN and EEOP_SBSREF_FETCH steps.
Also, they should be assigned before calling ExprEvalPushStep(), not after;
otherwise bugs may appear in the future.

> - ArrayRef   *aref = makeNode(ArrayRef);
> + NodeTag sbstag = nodeTag(src_expr);
> + Size nodeSize = sizeof(SubscriptingRef);
> + SubscriptingRef *sbsref = (SubscriptingRef *) newNode(nodeSize, 
> sbstag);

Is there any necessity to use newNode() instead of makeNode()? The previous
code was shorter.

There are no changes in execnodes.h except a removed line, so I think
execnodes.h could be removed from the patch.

> 
> I'm going to make few more improvements, but in the meantime I hope we can
> continue to review the patch.

I will wait.

-- 
Arthur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Fix dumping pre-10 DBs by pg_dump10 if table "name" exists

2017-10-31 Thread Tom Lane
Vitaly Burovoy <vitaly.buro...@gmail.com> writes:
> Recently my colleagues found a bug.

> -   "SELECT 'bigint'::name AS sequence_type, "
> +   "SELECT 'bigint'::pg_catalog.name AS sequence_type, "

Good catch, but I think we could simplify this by just omitting the cast
altogether:

- "SELECT 'bigint'::name AS sequence_type, "
+ "SELECT 'bigint' AS sequence_type, "

pg_dump doesn't particularly care whether the column comes back marked
as 'name' or 'text' or 'unknown'.

    regards, tom lane




Re: [HACKERS] Add some const decorations to prototypes

2017-10-31 Thread Tom Lane
Peter Eisentraut <peter.eisentr...@2ndquadrant.com> writes:
> Here is a patch that adds const decorations to many char * arguments in
> functions.  It should have no impact otherwise; there are very few code
> changes caused by it.

+1 in general ...

> Some functions have a strtol()-like behavior
> where they take in a const char * and return a pointer into that as
> another argument.  In those cases, I added a cast or two.

... but I'm not sure that it's an improvement in cases where you have to
cast away the const somewhere else.  I realize that strtol has an ancient
pedigree, but I do not think it's very good design.
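
(For reference, the pattern in question is the strtol() signature from the
C standard library, where the input is const but the returned position is
not, so a cast has to happen somewhere:

	long strtol(const char *nptr, char **endptr, int base);

	const char *s = "42abc";
	char	   *end;
	long		v = strtol(s, &end, 10);	/* end points into s, as plain char * */

Illustration only; the variables are made up.)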

regards, tom lane




[HACKERS] Add some const decorations to prototypes

2017-10-31 Thread Peter Eisentraut
Here is a patch that adds const decorations to many char * arguments in
functions.  It should have no impact otherwise; there are very few code
changes caused by it.  Some functions have a strtol()-like behavior
where they take in a const char * and return a pointer into that as
another argument.  In those cases, I added a cast or two.

Generally, I find these const decorations useful as easy function
documentation.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From a8163e84c83887e4a3b81642c137995932701bb5 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Tue, 31 Oct 2017 10:34:31 -0400
Subject: [PATCH] Add some const decorations to prototypes

---
 contrib/cube/cubeparse.y   | 12 
 contrib/dict_xsyn/dict_xsyn.c  |  2 +-
 contrib/fuzzystrmatch/dmetaphone.c |  4 +--
 contrib/pgcrypto/pgcrypto.c|  4 +--
 contrib/seg/seg.c  |  4 +--
 contrib/seg/segdata.h  |  2 +-
 contrib/seg/segparse.y |  4 +--
 contrib/unaccent/unaccent.c|  2 +-
 contrib/uuid-ossp/uuid-ossp.c  |  2 +-
 src/backend/access/common/reloptions.c | 12 
 src/backend/access/gist/gistbuild.c|  2 +-
 src/backend/access/transam/xact.c  |  6 ++--
 src/backend/access/transam/xlogarchive.c   |  4 +--
 src/backend/catalog/heap.c | 10 +++
 src/backend/commands/comment.c |  4 +--
 src/backend/commands/event_trigger.c   |  4 +--
 src/backend/commands/extension.c   |  4 +--
 src/backend/commands/indexcmds.c   |  8 +++---
 src/backend/commands/opclasscmds.c |  2 +-
 src/backend/commands/tablecmds.c   | 16 +--
 src/backend/commands/typecmds.c|  6 ++--
 src/backend/commands/view.c|  2 +-
 src/backend/libpq/auth.c   | 24 
 src/backend/libpq/hba.c|  6 ++--
 src/backend/parser/parse_expr.c|  2 +-
 src/backend/parser/parse_func.c|  4 +--
 src/backend/parser/parse_relation.c|  8 +++---
 src/backend/parser/parse_target.c  |  2 +-
 src/backend/port/dynloader/darwin.c|  8 +++---
 src/backend/port/dynloader/darwin.h|  4 +--
 src/backend/port/dynloader/hpux.c  |  4 +--
 src/backend/port/dynloader/hpux.h  |  4 +--
 src/backend/port/dynloader/linux.c |  4 +--
 src/backend/postmaster/postmaster.c|  2 +-
 src/backend/replication/basebackup.c   |  8 +++---
 src/backend/rewrite/rewriteDefine.c|  4 +--
 src/backend/snowball/dict_snowball.c   |  2 +-
 src/backend/storage/lmgr/lwlock.c  |  8 +++---
 src/backend/tsearch/dict_thesaurus.c   |  2 +-
 src/backend/tsearch/spell.c|  4 +--
 src/backend/utils/adt/float.c  | 16 +--
 src/backend/utils/adt/genfile.c|  2 +-
 src/backend/utils/adt/ruleutils.c  |  4 +--
 src/backend/utils/adt/varlena.c|  2 +-
 src/backend/utils/adt/xml.c| 32 +++---
 src/bin/initdb/initdb.c| 12 
 src/bin/pg_dump/pg_backup_db.c |  2 +-
 src/bin/pg_dump/pg_backup_db.h |  2 +-
 src/bin/pg_rewind/fetch.c  |  2 +-
 src/bin/pg_rewind/fetch.h  |  2 +-
 src/bin/pg_upgrade/option.c|  6 ++--
 src/bin/pg_upgrade/pg_upgrade.c|  4 +--
 src/bin/pg_waldump/pg_waldump.c|  2 +-
 src/bin/pgbench/pgbench.c  |  4 +--
 src/include/access/gist_private.h  |  2 +-
 src/include/access/reloptions.h| 14 +-
 src/include/access/xact.h  |  6 ++--
 src/include/access/xlog_internal.h |  4 +--
 src/include/catalog/heap.h |  2 +-
 src/include/commands/comment.h |  4 +--
 src/include/commands/defrem.h  |  4 +--
 src/include/commands/typecmds.h|  2 +-
 src/include/commands/view.h|  2 +-
 src/include/executor/tablefunc.h   |  8 +++---
 src/include/parser/parse_relation.h|  6 ++--
 src/include/parser/parse_target.h  |  2 +-
 src/include/postmaster/bgworker.h  |  2 +-
 src/include/rewrite/rewriteDefine.h|  2 +-
 

Re: [HACKERS] Query regarding permission on table_column%type access

2017-10-31 Thread Tom Lane
Stephen Frost <sfr...@snowman.net> writes:
> * Neha Sharma (neha.sha...@enterprisedb.com) wrote:
>> I have observed that even if the user does not have permission on a
>> table (created by some other user), a function parameter can still be
>> declared as that table_column%type.

> This is because the creation of the table also creates a type of the
> same name and the type's permissions are independent of the table's.  I
> imagine that you could REVOKE USAGE ON TYPE from the type and deny
> access to that type if you wanted to.

Right.  (I checked, seems to work as expected.)
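
For completeness, the check was along these lines (a sketch with made-up
names):

  CREATE TABLE t1 (a int);
  REVOKE USAGE ON TYPE t1 FROM PUBLIC;
  -- unprivileged roles can no longer use the rowtype t1, e.g. as a
  -- function parameter type, though the table's own ACL is unchanged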

> I'm not sure that we should change the REVOKE on the table-level to also
> mean to REVOKE access to the type automatically (and what happens if you
> GRANT the access back for the table..?

It seems pretty silly for privileges on table rowtypes to behave
differently from those on other rowtypes.

        regards, tom lane




Re: [HACKERS] Remove secondary checkpoint

2017-10-31 Thread Simon Riggs
On 30 October 2017 at 18:58, Simon Riggs <si...@2ndquadrant.com> wrote:
> On 30 October 2017 at 15:22, Simon Riggs <si...@2ndquadrant.com> wrote:
>
>>> You forgot to update this formula in xlog.c:
>>> distance = (2.0 + CheckPointCompletionTarget) * CheckPointDistanceEstimate;
>>> /* add 10% for good measure. */
>>> distance *= 1.10;
>>> You can guess easily where the mistake is.
>>
>> Doh - fixed that before posting, so I must have sent the wrong patch.
>>
>> It's the 10%, right? ;-)
>
> So, there are 2 locations that mention 2.0 in xlog.c
>
> I had already fixed one, which is why I remembered editing it.
>
> The other one you mention has a detailed comment above it explaining
> why it should be 2.0 rather than 1.0, so I left it.
>
> Let me know if you still think it should be removed? I can see the
> argument both ways.
>
> Or maybe we need another patch to account for manual checkpoints.

Revised patch to implement this

-- 
Simon Riggs  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


remove_secondary_checkpoint.v5.patch
Description: Binary data



Re: [HACKERS] Re: PANIC: invalid index offnum: 186 when processing BRIN indexes in VACUUM

2017-10-31 Thread Tom Lane
Michael Paquier <michael.paqu...@gmail.com> writes:
> On Tue, Oct 31, 2017 at 4:56 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> Yeah, we're still missing an understanding of why we didn't see it
>> before; the inadequate locking was surely there before.

> Because 24992c6d has added a check on the offset number by using
> PageIndexTupleDeleteNoCompact() in brin_doupdate(), making the checks
> tighter, no?

No, I don't see how it's tighter.  The old code matched supplied
offnum(s) against the indexes of not-unused items, and then after
that loop it complained if they weren't all matched.  So it should
also have failed, albeit with a different error message, if it were
passed an offnum corresponding to a no-longer-live tuple.

regards, tom lane



