Re: [HACKERS] Relax requirement for INTO with SELECT in pl/pgsql

2016-06-05 Thread Pavel Stehule
2016-06-05 5:45 GMT+02:00 David G. Johnston :

> On Tuesday, March 22, 2016, Merlin Moncure  wrote:
>
>>
>> Anyways, here's the patch with documentation adjustments as promised.
>> I ended up keeping the 'without result' section because it contained
>> useful information about plan caching.
>>
>> merlin
>>
>> diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml
>> new file mode 100644
>> index 9786242..512eaa7
>> *** a/doc/src/sgml/plpgsql.sgml
>> --- b/doc/src/sgml/plpgsql.sgml
>> *** my_record.user_id := 20;
>> *** 904,910 
>>   Executing a Command With No Result
>>
>>   
>> !  For any SQL command that does not return rows, for example
>>INSERT without a RETURNING clause, you can
>>execute the command within a PL/pgSQL
>> function
>>just by writing the command.
>> --- 904,910 
>>   Executing a Command With No Result
>>
>>   
>> !  For any SQL command, for example
>>INSERT without a RETURNING clause, you can
>>execute the command within a PL/pgSQL
>> function
>>just by writing the command.
>> *** my_record.user_id := 20;
>
>
> This just seems odd...a bit broader approach here would be nice.
> Especially since now it's not the command that doesn't have a result but
> the user decision to not capture any result that may be present.  Using
> insert returning for the example here is just, again, odd...
>
> Removing PERFORM requires a bit more reframing of the docs than you've
> done here; too much was influenced by its presence.
>
>
>> *** 925,972 
>>.
>>   
>>
>> - 
>> -  Sometimes it is useful to evaluate an expression or
>> SELECT
>> -  query but discard the result, for example when calling a function
>> -  that has side-effects but no useful result value.  To do
>> -  this in PL/pgSQL, use the
>> -  PERFORM statement:
>> -
>> - 
>> - PERFORM query;
>> - 
>> -
>> -  This executes query and discards the
>> -  result.  Write the query the same
>> -  way you would write an SQL SELECT command, but replace
>> the
>> -  initial keyword SELECT with PERFORM.
>> -  For WITH queries, use PERFORM and then
>> -  place the query in parentheses.  (In this case, the query can only
>> -  return one row.)
>> -  PL/pgSQL variables will be
>> -  substituted into the query just as for commands that return no
>> result,
>> -  and the plan is cached in the same way.  Also, the special variable
>> -  FOUND is set to true if the query produced at
>> -  least one row, or false if it produced no rows (see
>> -  ).
>> - 
>> -
>>   
>>
>> !   One might expect that writing SELECT directly
>> !   would accomplish this result, but at
>> !   present the only accepted way to do it is
>> !   PERFORM.  A SQL command that can return rows,
>> !   such as SELECT, will be rejected as an error
>> !   unless it has an INTO clause as discussed in the
>> !   next section.
>>
>>   
>>
>>   
>>An example:
>>   
>> ! PERFORM create_mv('cs_session_page_requests_mv', my_query);
>>   
>>   
>>  
>> --- 925,944 
>>.
>>   
>>
>>   
>>
>> !   In older versions of PostgreSQL, it was mandatory to use
>> !   PERFORM instead of SELECT
>> !   when the query could return data that was not captured into
>> !   variables.  This requirement has been relaxed and usage of
>> !   PERFORM has been deprecated.
>>
>>   
>>
>>   
>>An example:
>>   
>> ! SELECT create_mv('cs_session_page_requests_mv', my_query);
>
>
> I'd consider making the note into a paragraph and then saying that the
> select form and perform form are equivalent by writing both out one after
> the other.  The note brings emphasis that is not necessary if we want to
> de-emphasize perform.
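
For illustration, the side-by-side presentation suggested above might
read like this (a sketch assuming the patched behavior; create_mv and
my_query come from the existing documentation example):

PERFORM create_mv('cs_session_page_requests_mv', my_query);
SELECT create_mv('cs_session_page_requests_mv', my_query);

Both forms execute the function call and discard its result; only the
first is accepted by released versions.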
>
>   
>>   
>>  
>> *** GET DIAGNOSTICS integer_var = ROW_COUNT;
>> *** 1480,1491 
>> 
>> 
>>  
>> ! A PERFORM statement sets
>> FOUND
>>   true if it produces (and discards) one or more rows, false
>> if
>>   no row is produced.
>>  
>> 
>> 
>>  
>>   UPDATE, INSERT, and
>> DELETE
>>   statements set FOUND true if at least one
>> --- 1452,1471 
>> 
>> 
>>  
>> ! A SELECT statement sets FOUND
>>   true if it produces (and discards) one or more rows, false
>> if
>>   no row is produced.
>>  
>
>
> This could be worded better: FOUND will be set true regardless of
> whether the result is discarded.
>
>
>> 
>> 
>> +
>> + A PERFORM statement sets
>> FOUND
>> + true if it produces (and discards) one or more rows, false if no row is produced.
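
A concrete sketch of the FOUND semantics under discussion (this is the
current PERFORM behavior; under the patch, a bare SELECT is documented
to behave the same way):

DO $$
BEGIN
    PERFORM 1 WHERE false;            -- produces no row
    RAISE NOTICE 'FOUND = %', FOUND;  -- prints: FOUND = f
    PERFORM 1;                        -- produces (and discards) one row
    RAISE NOTICE 'FOUND = %', FOUND;  -- prints: FOUND = t
END
$$;

As noted above, FOUND tracks whether rows were produced, regardless of
whether the result is captured or discarded.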

[HACKERS] Reference to UT1

2016-06-05 Thread Thomas Munro
Hi

The manual[1] says "Technically, PostgreSQL uses UT1 rather than UTC
because leap seconds are not handled."  I'm certainly no expert on
this stuff but it seems to me that we are using POSIX time[2] or Unix
time, not UT1.

By my reading, UT1 is one of several time scales of interest to
scientists, based on the earth's rotational angle, and therefore has
seconds of variable duration determined after the fact from
astronomical observations, whereas POSIX time is a simplification of
the UTC time scale, which is based on SI seconds defined by an atomic
clock.  The simplification is that POSIX time completely ignores leap
seconds: it cannot represent leap seconds inserted into UTC, and it
would represent phantom non-existent seconds if UTC ever deleted a second.

While you could argue that Postgres is time scale agnostic, and that
users are free to consider their time data to refer to points on any
time scale that has strictly 86400 seconds per day, I see two
arguments for the time scale being POSIX in particular:

1.  We tolerate UTC leap seconds like '23:59:60' on input.  Then we
throw away some information by shifting it into the 'next' second,
because our model can't represent the leap second itself (though we
could: we have the table of leap seconds at
src/timezone/data/leapseconds).  That string does not name a valid
time on the TAI, GPS, UT0, UT1, UT1R or UT2 time scales.  It does name
a valid UTC time (at least on certain dates), and fits with the theory
that we are using POSIX-style simplified UTC.  We're actively doing
UTC-with-gaps (like most software).
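
For example (illustrative output):

=# select timestamptz '2015-06-30 23:59:60+00';
       timestamptz
------------------------
 2015-07-01 00:00:00+00
(1 row)

The leap second is accepted on input but silently folded into the
following second, which is exactly the POSIX-style behavior described
above.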

2.  While your computer's implementation of gettimeofday() may in
theory be hooked up to whatever gizmo is needed to measure the earth,
in practice all computers try to approximate or track atomic clocks.
The duration of a second is based on caesium atoms, not observations
of a wobbly planet.  If Postgres were somehow using UT1, then now()
would be up to 0.9 seconds away from the UTC time shown on the
electronic devices all around you (the maximum drift the UTC people
allow before they add or delete a second).  What actually happens is
that gettimeofday() tracks UTC, except during the moments POSIX time
can't address (and immediately nearby, where various strategies are
used to wallpaper over the gaps with time-smearing algorithms,
depending on your OS, ntpd, etc.).  Most people never have to worry
about this, unless, for example, they are running a stock exchange
that needs to open immediately after a leap second (as happened in
Japan recently), in which case they should worry.
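
To make the strictly-86400-seconds-per-day point concrete, here is a
self-contained C sketch (not Postgres code) showing that POSIX time
has no slot for the leap second UTC inserted at the end of 2015-06-30:

#include <stdio.h>
#include <time.h>

int main(void)
{
	/*
	 * POSIX time gives every day exactly 86400 seconds.  UTC inserted
	 * a leap second (23:59:60) at the end of 2015-06-30, but the POSIX
	 * value following 1435708799 (2015-06-30 23:59:59 UTC) is already
	 * midnight of the next day; the leap second has no representation.
	 */
	time_t		t = 1435708800;
	struct tm	tm;

	gmtime_r(&t, &tm);
	printf("%04d-%02d-%02d %02d:%02d:%02d\n",
		   tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
		   tm.tm_hour, tm.tm_min, tm.tm_sec);	/* 2015-07-01 00:00:00 */
	return 0;
}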

[1] https://www.postgresql.org/docs/9.5/static/view-pg-timezone-names.html
[2] https://en.wikipedia.org/wiki/Unix_time

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-06-05 Thread Tomas Vondra

Hi,

While this thread was effectively superseded by the 'new design' thread 
[1], I'd like to address a few points raised here, as they are relevant 
for the new design (at least I believe so).


[1] https://www.postgresql.org/message-id/31041.1465069...@sss.pgh.pa.us

On 06/04/2016 08:15 PM, Tom Lane wrote:
...


* Make RelationGetFKeyList cache a list of ForeignKeyOptInfo structs,
not just constraint OIDs.  It's insane that the relcache scans
pg_constraint to collect those OIDs and then the planner re-reads all
those same rows on every planning cycle.  Allow the relcache to return
a pointer to the list in-cache, and require the planner to copy that
data before making any more cache requests.  The planner would run
through the list, skipping single-key entries and entries leading to
irrelevant tables, and copy out just the items that are useful for
the current query (probably adding query-specific table RTE indexes
at the same time).


That seems like a fairly straightforward change, and I'm not opposed to 
doing that. However, RelationGetFKeyList is basically a modified copy of 
RelationGetIndexList, so it shares the same general behavior, including 
caching of OIDs and then constructing IndexOptInfo objects on each 
get_relation_info() call. Why is it 'insane' for foreign keys but not 
for indexes? Or what am I missing?




* Personally I'd probably handle the "ignore irrelevant tables" test
with a list_member_oid test on a list of relation OIDs, not a
hashtable. It's unlikely that there are enough tables in the query to
justify building a hashtable.


OK



* All of the above should happen only if the query contains multiple
tables; there is no reason to expend even one cycle on loading FK data
in a simple single-table query.


OK



... snip the part about nested loops (new design thread) ...

Also, there are serious bugs remaining, even without considering planning
speed.  An example I just noticed is that quals_match_foreign_key actually
fails entirely to ensure that it's found a match to the FK, because there
is no check on whether foreignrel has anything to do with the FK.  That
is, this bit:

 * We make no attempt in checking that this foreign key actually
 * references 'foreignrel', the reasoning here is that we may be able
 * to match the foreign key to an eclass member Var of a RestrictInfo
 * that's in qualslist, this Var may belong to some other relation.

would be okay if you checked after identifying a matching eclass member
that it belonged to the FK's referenced table, but AFAICS there is no such
check anywhere.



Perhaps I'm missing something, but I thought this is checked by these 
conditions in quals_match_foreign_key():


1) with ECs (line 3990)

if (foreignrel->relid == var->varno &&
fkinfo->confkeys[i] == var->varattno)
foundvarmask |= 1;

2) without ECs (line 4019)


if ((foreignrel->relid == leftvar->varno) &&
(fkrel->relid == rightvar->varno) &&
(fkinfo->confkeys[i] == leftvar->varattno) &&
(fkinfo->conkeys[i] == rightvar->varattno))
{
...
}
else if ((foreignrel->relid == rightvar->varno) &&
 (fkrel->relid == leftvar->varno) &&
 (fkinfo->confkeys[i] == rightvar->varattno) &&
 (fkinfo->conkeys[i] == leftvar->varattno))
{
...
}

Or does that fail for some reason?

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] New design for FK-based join selectivity estimation

2016-06-05 Thread Tomas Vondra

Hi,

On 06/04/2016 09:44 PM, Tom Lane wrote:

This is a branch of the discussion in
https://www.postgresql.org/message-id/flat/20160429102531.GA13701%40huehner.biz
but I'm starting a new thread as the original title is getting
increasingly off-topic.

I complained in that thread that the FK join selectivity patch had a
very brute-force approach to matching join qual clauses to FK
constraints, requiring a total of seven nested levels of looping to
get anything done, and expensively rediscovering the same facts over
and over.  Here is a sketch of what I think is a better way:

* During the planner's catalog data collection phase, construct a
single list (per PlannerInfo, not per RTE) of all relevant FKs.
In this data structure, each FK's referenced and referencing relations
are identified by relid (that is, RTE index) not just OID.  In the case
of a query containing a self-join, that would mean that the same FK
constraint could give rise to multiple list entries, one for each RTE
occurrence of its referenced or referencing target relation.  FKs not
relating two tables of the query are necessarily not in this list,
and we could also choose not to include single-column FKs.

* After finalizing equivalence classes, make a single pass through
the FK list and check each column-pair to see if it can be matched
to any EC (that is, both source and target columns are in the EC and
the comparison operator is in the EC's opfamily).  Mark each matched
column pair in the FK list data structure with a pointer to the EC.

* When performing join selectivity estimation, run through the FK list
a single time, ignoring entries that do not link a member of the join's
LHS to a member of the join's RHS.  This is a fairly cheap test given
the relid labels; it'd be approximately

if ((bms_is_member(fkinfo->src_relid, outer_rel->relids) &&
 bms_is_member(fkinfo->dst_relid, inner_rel->relids)) ||
(bms_is_member(fkinfo->dst_relid, outer_rel->relids) &&
 bms_is_member(fkinfo->src_relid, inner_rel->relids)))

For each potentially interesting FK entry, run through the join
qual list.  A RestrictInfo that was generated from an EC matches
the FK if and only if that EC appears in the per-column markings;
other RestrictInfos are matched to one of the FK columns normally
(I think this path can ignore FK columns that have been matched to ECs).
At the end of that, we can determine whether all the FK columns have
been matched to some qual item, and we have a count and/or bitmapset
of the qual list entries that matched the FK.  Remember the FK entry
with the largest such count.
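
In rough C terms, the per-join pass might look like the sketch below.
This is illustrative only: ForeignKeyOptInfo and its fields (src_relid,
dst_relid, nkeys, eclass[]) are hypothetical names for the proposed
per-query FK data structure, and the non-EC branch is elided:

ForeignKeyOptInfo *best_fk = NULL;
int			best_nmatched = 0;
ListCell   *lc;

foreach(lc, root->fkey_list)
{
	ForeignKeyOptInfo *fkinfo = (ForeignKeyOptInfo *) lfirst(lc);
	int			nmatched = 0;
	ListCell   *lc2;

	/* cheap relid test: skip FKs that do not link the two join sides */
	if (!((bms_is_member(fkinfo->src_relid, outer_rel->relids) &&
		   bms_is_member(fkinfo->dst_relid, inner_rel->relids)) ||
		  (bms_is_member(fkinfo->dst_relid, outer_rel->relids) &&
		   bms_is_member(fkinfo->src_relid, inner_rel->relids))))
		continue;

	foreach(lc2, joinquals)
	{
		RestrictInfo *rinfo = (RestrictInfo *) lfirst(lc2);
		int			i;

		for (i = 0; i < fkinfo->nkeys; i++)
		{
			/* an EC-derived clause matches iff its EC is marked on this column */
			if (rinfo->parent_ec != NULL &&
				rinfo->parent_ec == fkinfo->eclass[i])
				nmatched++;
			/* (non-EC clauses would be matched against the column Vars here) */
		}
	}

	/* remember the FK that matches the most qual items */
	if (nmatched > best_nmatched)
	{
		best_fk = fkinfo;
		best_nmatched = nmatched;
	}
}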

* After scanning the list, we have our best FK match and can proceed
with making the actual selectivity estimate as in the current code.

With this approach, we have an iteration structure like

  * once per join relation created
* for each foreign key constraint relevant to the query
(but skipping the loops below if it's not relevant to this join)
  * for each join qual for the joinrel pair
* for each key column in that FK

which gets us down from seven nested loops to four, and also makes the
work done in the innermost loops significantly cheaper for the EC case,
which will be the more common one.  It's also much easier to make this
structure do zero extra work when there are no relevant FKs, which is
a pleasant property for extra planner work to have.

Now, we'll also add some per-FK-per-EC setup work, but my guess is
that that's negligible compared to the per-join-relation work.

It's possible that we could reduce the cost of matching non-EC join
quals as well, with some trick along the line of pre-matching non-EC
RestrictInfos to FK items.  I'm unsure that that is worth the trouble
though.

Thoughts?


Firstly, thanks for looking into this, and also for coming up with a very 
detailed design proposal.


I do agree this new design seems superior to the current one and it 
seems worth a try. I'd like to see how far we can get over the next few 
days (say, until the end of the week).


One of the recent issues with the current design was handling of 
inheritance / appendrels. ISTM the proposed design has the same issue, 
no? What happens if the relations are partitioned?


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] Typo in parallel comment of heap_delete()

2016-06-05 Thread Jim Nasby

I'm pretty sure this is a typo...
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 950bfc8..c061507 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -3019,7 +3019,7 @@ heap_delete(Relation relation, ItemPointer tid,
Assert(ItemPointerIsValid(tid));
 
/*
-* Forbid this during a parallel operation, lets it allocate a combocid.
+* Forbid this during a parallel operation, lest it allocate a combocid.
 * Other workers might need that combocid for visibility checks, and we
 * have no provision for broadcasting it to them.
 */



Re: [HACKERS] Parallel query and temp_file_limit

2016-06-05 Thread Peter Geoghegan
On Wed, May 18, 2016 at 12:01 PM, Peter Geoghegan  wrote:
>> I think for 9.6 we just have to document this issue.  In the next
>> release, we could (and might well want to) try to do something more
>> clever.
>
> Works for me. You may wish to update comments within fd.c at the same time.

I've created a 9.6 open issue for this.

-- 
Peter Geoghegan




Re: [HACKERS] New design for FK-based join selectivity estimation

2016-06-05 Thread Tom Lane
I wrote:
> ... Here is a sketch of what I think is a better way:
> ...
> It's possible that we could reduce the cost of matching non-EC join
> quals as well, with some trick along the line of pre-matching non-EC
> RestrictInfos to FK items.  I'm unsure that that is worth the trouble
> though.

After further thought, I believe that may well be worth doing.  That
is, someplace after deconstruct_jointree(), examine all the FKs and
match their columns not only to ECs but to non-EC joinclauses, which
we could find by trawling the joininfo list for either subject relation.
We'd end up with an EC pointer and/or a list of non-EC RestrictInfos
for each FK column.

The thing that makes this attractive is that at the end of this matching,
we would know exactly whether each FK is matched to the query as a whole:
either all its columns have matches, or they don't.  It's not necessary to
re-determine that for each joinrel pair that includes the FK's two subject
relations.  So the per-join-relation work would reduce to scanning the FK
list once to find the matched FK(s) that connect relations on opposite
sides of the join.  Once we've found such an FK, identifying which join
qual list items should be dropped in favor of applying the FK's
selectivity is also really easy: we just check the column markings.
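
To make that concrete, the per-query FK entry might carry markings
along these lines (a sketch; the struct and field names are
hypothetical, chosen only to mirror the description above, and assume
the usual Postgres planner headers):

typedef struct ForeignKeyOptInfo
{
	Index		src_relid;		/* RTE index of referencing rel */
	Index		dst_relid;		/* RTE index of referenced rel */
	int			nkeys;			/* number of key column pairs */
	AttrNumber	conkeys[INDEX_MAX_KEYS];	/* referencing columns */
	AttrNumber	confkeys[INDEX_MAX_KEYS];	/* referenced columns */
	/* filled in after deconstruct_jointree(): */
	EquivalenceClass *eclass[INDEX_MAX_KEYS];	/* matching EC, or NULL */
	List	   *rinfos[INDEX_MAX_KEYS]; /* matching non-EC RestrictInfos */
} ForeignKeyOptInfo;

An FK is then matched to the query as a whole iff every column has a
non-NULL eclass entry or a non-empty rinfos list, which is exactly the
test the per-joinrel scan needs to make.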

regards, tom lane




Re: [HACKERS] [sqlsmith] Failed assertion in joinrels.c

2016-06-05 Thread Tom Lane
Michael Paquier  writes:
> Still, on back branches I think that it would be a good idea to have a
> better error message for the ones that already throw an error, like
> "trigger with OID %u does not exist". Switching from elog to ereport
> would be a good idea, but wouldn't it be a problem to change what is
> user-visible?

If we're going to touch this behavior in the back branches, I would
vote for changing it the same as we do in HEAD.  I do not think that
making a user-visible behavior change in a minor release and then a
different change in the next major is good strategy.

But, given the relative shortage of complaints from the field, I'd
be more inclined to just do nothing in back branches.  There might
be people out there with code depending on the current behavior,
and they'd be annoyed if we change it in a minor release.

regards, tom lane




[HACKERS] [sqlsmith] Failed assertion in postgres_fdw/deparse.c:1116

2016-06-05 Thread Andreas Seltenreich
After creating some foreign tables via postgres_fdw in the regression db
of master as of de33af8, sqlsmith triggers the following assertion:

TRAP: FailedAssertion("!(((((const Node*)(var))->type) == T_Var))", File: 
"deparse.c", Line: 1116)

gdb says var is holding a T_PlaceHolderVar instead.  In a build without
assertions, it leads to an error later:

ERROR:  cache lookup failed for type 0

Recipe:

--8<---cut here---start->8---
create extension postgres_fdw;
create server myself foreign data wrapper postgres_fdw;
create schema fdw_postgres;
create user mapping for public server myself options (user :'USER');
import foreign schema public from server myself into fdw_postgres;
select subq_0.c0 as c0 from
   (select 31 as c0 from fdw_postgres.a as ref_0
  where 93 >= ref_0.aa) as subq_0
   right join fdw_postgres.rtest_vview5 as ref_1
   on (subq_0.c0 = ref_1.a )
   where 92 = subq_0.c0;
--8<---cut here---end--->8---

regards,
Andreas




Re: [HACKERS] Statement timeout

2016-06-05 Thread Tatsuo Ishii
>> BTW, aren't you missing a re-enable of the timeout for statements after
>> the first one?
> 
> Will check.

You are right. Here is the revised patch.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index b185c1b..88f5c54 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -149,6 +149,11 @@ static bool doing_extended_query_message = false;
 static bool ignore_till_sync = false;
 
 /*
+ * Flag to keep track of whether we have started the statement timeout timer.
+ */
+static bool st_timeout = false;
+
+/*
  * If an unnamed prepared statement exists, it's stored here.
  * We keep it separate from the hashtable kept by commands/prepare.c
  * in order to reduce overhead for short-lived queries.
@@ -188,6 +193,8 @@ static bool IsTransactionStmtList(List *parseTrees);
 static void drop_unnamed_stmt(void);
 static void SigHupHandler(SIGNAL_ARGS);
 static void log_disconnections(int code, Datum arg);
+static void enable_statement_timeout(void);
+static void disable_statement_timeout(void);
 
 
 /* 
@@ -1227,6 +1234,11 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	start_xact_command();
 
 	/*
+	 * Set statement timeout running, if any
+	 */
+	enable_statement_timeout();
+
+	/*
 	 * Switch to appropriate context for constructing parsetrees.
 	 *
 	 * We have two strategies depending on whether the prepared statement is
@@ -1516,6 +1528,11 @@ exec_bind_message(StringInfo input_message)
 	 */
 	start_xact_command();
 
+	/*
+	 * Set statement timeout running, if any
+	 */
+	enable_statement_timeout();
+
 	/* Switch back to message context */
 	MemoryContextSwitchTo(MessageContext);
 
@@ -1931,6 +1948,11 @@ exec_execute_message(const char *portal_name, long max_rows)
 	start_xact_command();
 
 	/*
+	 * Set statement timeout running, if any
+	 */
+	enable_statement_timeout();
+
+	/*
 	 * If we re-issue an Execute protocol request against an existing portal,
 	 * then we are only fetching more rows rather than completely re-executing
 	 * the query from the start. atStart is never reset for a v3 portal, so we
@@ -2002,6 +2024,11 @@ exec_execute_message(const char *portal_name, long max_rows)
 			 * those that start or end a transaction block.
 			 */
 			CommandCounterIncrement();
+
+			/*
+			 * We need to reset statement timeout if already set.
+			 */
+			disable_statement_timeout();
 		}
 
 		/* Send appropriate CommandComplete to client */
@@ -2433,14 +2460,10 @@ start_xact_command(void)
 (errmsg_internal("StartTransactionCommand")));
 		StartTransactionCommand();
 
-		/* Set statement timeout running, if any */
-		/* NB: this mustn't be enabled until we are within an xact */
-		if (StatementTimeout > 0)
-			enable_timeout_after(STATEMENT_TIMEOUT, StatementTimeout);
-		else
-			disable_timeout(STATEMENT_TIMEOUT, false);
-
 		xact_started = true;
+
+		/* Set statement timeout running, if any */
+		enable_statement_timeout();
 	}
 }
 
@@ -2450,7 +2473,7 @@ finish_xact_command(void)
 	if (xact_started)
 	{
 		/* Cancel any active statement timeout before committing */
-		disable_timeout(STATEMENT_TIMEOUT, false);
+		disable_statement_timeout();
 
 		/* Now commit the command */
 		ereport(DEBUG3,
@@ -4510,3 +4533,51 @@ log_disconnections(int code, Datum arg)
 	port->user_name, port->database_name, port->remote_host,
   port->remote_port[0] ? " port=" : "", port->remote_port)));
 }
+
+/*
+ * Set statement timeout if any.
+ */
+static void enable_statement_timeout(void)
+{
+	if (!st_timeout)
+	{
+		if (StatementTimeout > 0)
+		{
+
+			/*
+			 * Sanity check
+			 */
+			if (!xact_started)
+			{
+				ereport(ERROR,
+						(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+						 errmsg("transaction must already be started to set statement timeout")));
+			}
+
+			ereport(DEBUG3,
+					(errmsg_internal("Set statement timeout")));
+
+			enable_timeout_after(STATEMENT_TIMEOUT, StatementTimeout);
+			st_timeout = true;
+		}
+		else
+			disable_timeout(STATEMENT_TIMEOUT, false);
+	}
+}
+
+/*
+ * Reset statement timeout if any.
+ */
+static void disable_statement_timeout(void)
+{
+	if (st_timeout)
+	{
+		ereport(DEBUG3,
+				(errmsg_internal("Disable statement timeout")));
+
+		/* Cancel any active statement timeout */
+		disable_timeout(STATEMENT_TIMEOUT, false);
+
+		st_timeout = false;
+	}
+}



Re: [HACKERS] [sqlsmith] Failed assertion in joinrels.c

2016-06-05 Thread Michael Paquier
On Sun, Jun 5, 2016 at 4:28 PM, Michael Paquier
 wrote:
> On Sun, Jun 5, 2016 at 12:29 AM, Tom Lane  wrote:
>> Michael Paquier  writes:
>>> This is still failing:
>>> =# select indexdef from pg_catalog.pg_indexes where indexdef is not NULL;
>>> ERROR:  XX000: cache lookup failed for index 2619
>>> LOCATION:  pg_get_indexdef_worker, ruleutils.c:1054
>>> Do we want to improve at least the error message?
>>
>> Maybe we should just make the function silently return NULL if the OID
>> doesn't refer to an index relation.  There's precedent for that sort
>> of behavior ...
>
> For views, "Not a view" is simply returned, and for rules it's
> "-". All the others behave similarly to indexes:
> =# select pg_get_constraintdef(0);
> ERROR:  XX000: cache lookup failed for constraint 0
> =# select pg_get_triggerdef(0);
> ERROR:  XX000: could not find tuple for trigger 0
> =# select pg_get_functiondef(0);
> ERROR:  XX000: cache lookup failed for function 0
> =# select pg_get_viewdef(0);
>  pg_get_viewdef
> ----------------
>  Not a view
> (1 row)
> =# select pg_get_ruledef(0);
>  pg_get_ruledef
> ----------------
>  -
> (1 row)
>
> So yes we could just return NULL for all of them, or make them throw
> an error. Anyway, my vote goes for a HEAD-only change, and just throw
> NULL... This way an application still has the option to fall back to
> something else with a CASE at SQL level without using plpgsql to catch
> the error.
>
> Also, I have not looked much at the code paths that rely on the
> existing rules, but I would imagine that there are things depending
> on the current behavior as well.

Still, on back branches I think that it would be a good idea to have a
better error message for the ones that already throw an error, like
"trigger with OID %u does not exist". Switching from elog to ereport
would be a good idea, but wouldn't it be a problem to change what is
user-visible?
-- 
Michael




Re: [HACKERS] [sqlsmith] Failed assertion in joinrels.c

2016-06-05 Thread Michael Paquier
On Sun, Jun 5, 2016 at 12:29 AM, Tom Lane  wrote:
> Michael Paquier  writes:
>> This is still failing:
>> =# select indexdef from pg_catalog.pg_indexes where indexdef is not NULL;
>> ERROR:  XX000: cache lookup failed for index 2619
>> LOCATION:  pg_get_indexdef_worker, ruleutils.c:1054
>> Do we want to improve at least the error message?
>
> Maybe we should just make the function silently return NULL if the OID
> doesn't refer to an index relation.  There's precedent for that sort
> of behavior ...

For views, "Not a view" is simply returned, and for rules it's
"-". All the others behave similarly to indexes:
=# select pg_get_constraintdef(0);
ERROR:  XX000: cache lookup failed for constraint 0
=# select pg_get_triggerdef(0);
ERROR:  XX000: could not find tuple for trigger 0
=# select pg_get_functiondef(0);
ERROR:  XX000: cache lookup failed for function 0
=# select pg_get_viewdef(0);
 pg_get_viewdef
----------------
 Not a view
(1 row)
=# select pg_get_ruledef(0);
 pg_get_ruledef
----------------
 -
(1 row)

So yes we could just return NULL for all of them, or make them throw
an error. Anyway, my vote goes for a HEAD-only change, and just throw
NULL... This way an application still has the option to fall back to
something else with a CASE at SQL level without using plpgsql to catch
the error.
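
For instance, with the NULL-returning behavior something like this
would work (a sketch, assuming the functions are changed to return
NULL rather than raise an error for a bad OID):

=# select coalesce(pg_get_indexdef(c.oid), '(not an index)')
   from pg_class c;

Today the same query fails outright as soon as it hits a non-index OID.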

Also, I have not looked much at the code paths that rely on the
existing rules, but I would imagine that there are things depending
on the current behavior as well.
-- 
Michael

