Re: [HACKERS] Online DW

2016-06-10 Thread Sridhar N Bamandlapally
Ok, let me put it this way:

I need every transaction coming from the application to sync with both the
production and archive DBs, but the transactions I run to clean out old data
(older than 7 days) on the production DB during the daily maintenance window
should not sync with the archive DB.

The archive DB needs to be read-only; it is used for maintaining integrity
with other business applications.
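
For what it's worth, one way to approximate a read-only archive DB is a
one-line sketch like the following (the database name is hypothetical, and
the setting is not enforced against superusers or sessions that explicitly
override it):

ALTER DATABASE archive SET default_transaction_read_only = on;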

The issues here are:
1. ETL is a scheduler; it cannot run on every transaction, and even if it
could, it's expensive

2. Materialized views (refresh on commit) or Slony will also sync the
clean-up transactions

3. Replication is not archiving, so it's definitely not an option

I'd say every online archive DB is a use case for this.

Thanks
Sridhar
Opentext


On 10 Jun 2016 22:36, "David G. Johnston" wrote:

> On Fri, Jun 10, 2016 at 4:11 AM, Sridhar N Bamandlapally <sridhar@gmail.com> wrote:
>
>> Hi
>>
>> Is there any feature in PostgreSQL where online DW (data warehousing) is
>> possible?
>>
>> I am looking for a scenario like
>>
>> 1. Production DB will have CURRENT + LAST 7 DAYS data only
>>
>> 2. Archive/DW DB will have CURRENT + COMPLETE HISTORY
>>
>> expecting something like streaming, but not ETL
>>
>>
> The entire DB couldn't operate this way, since not every record has a
> concept of time, and if you use any kind of physical time you are going to
> have issues as well.
>
> First impression is you want to horizontally partition your
> "time-impacted" tables so that each partition contains only data having the
> same ISO Week number in the same ISO Year.
>
> Remove older tables from the inheritance and stick them on a separate
> tablespace and/or stream them to another database.
>
> As has been mentioned, there are various tools out there today that can
> likely be used to fulfill whatever fundamental need you have.  "Not ETL" is
> not a need, though; it's at best a "nice-to-have", unless you are willing to
> forgo any solution to your larger problem just because the implementation
> is not optimal.
>
> Unless you define your true goals and constraints, it's going to be hard to
> make recommendations.
>
> David J.
>
>


Re: [HACKERS] [COMMITTERS] pgsql: Don't generate parallel paths for rels with parallel-restricted

2016-06-10 Thread Amit Kapila
On Fri, Jun 10, 2016 at 8:20 PM, Tom Lane  wrote:
>
> Amit Kapila  writes:
> > On Thu, Jun 9, 2016 at 11:38 PM, Tom Lane  wrote:
> >> Robert Haas  writes:
> >>> Could you answer my question about whether adjust_appendrel_attrs()
> >>> might translate Vars into non-Vars?
>
> >> Yes, absolutely.
>
> > Isn't this true only for UNION ALL cases and not for inheritance child
> > relations (at least that is what seems to be mentioned in comments
> > for translated_vars in AppendRelInfo)?
>
> Correct.
>
> > If that is right, then I think
> > there shouldn't be a problem w.r.t parallel plans even if
> > adjust_appendrel_attrs() translate Vars into non-Vars, because for UNION
> > ALL it seems parent rels rtekind is RTE_SUBQUERY and we consider such
> > rels as parallel unsafe (see set_rel_consider_parallel()).
>
> Hm, that would explain why you haven't been able to exhibit a bug on HEAD
> as it stands;
>

Right, so I have moved the "Failed assertion in parallel worker
(ExecInitSubPlan)" item to the CLOSE_WAIT state, as I don't think there is
any known pending issue in that item.  I have moved it to CLOSE_WAIT rather
than resolved because we derived our queries to reproduce the problem from
the original report[1].  If the next run of sqlsmith doesn't show any
problem in this context, then we will move it to resolved.


[1] - https://www.postgresql.org/message-id/8760use0hl.fsf%40credativ.de

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


[HACKERS] Confusing recovery message when target not hit

2016-06-10 Thread Thom Brown
Hi all,

When recovery_target_time is set, but recovery finishes before it reaches
that time, it outputs "before 2000-01-01 00:00:00+00" to the .history
file.  This is because it uses recoveryStopTime, which is initialised to 0,
but is never set, and is then passed to timestamptz_to_str, which gives it
this date output.

A similar problem exists for recovery_target_xid.  When recovery finishes
before reaching the specified xid, it outputs "before transaction 0" to the
.history file, which is also confusing.
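
For context, the setup that hits this is an ordinary point-in-time recovery
whose target is never reached, e.g. a recovery.conf along these lines
(paths and timestamp are hypothetical) where the archived WAL ends before
the target time:

restore_command = 'cp /mnt/server/archive/%f %p'
recovery_target_time = '2016-06-10 12:00:00+00'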

Could we produce something more meaningful?  I've attached a patch which
changes it to say 'recovery reached consistency before recovery target time
of "<recovery target time>"' and 'recovery reached consistency before
recovery target xid of "<recovery target xid>"'.

It may be the wrong way of going about it, but you get the idea of what I'm
suggesting we output instead.

Thom
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index e4645a3..c68697a 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -7105,15 +7105,25 @@ StartupXLOG(void)
 		 * timeline changed.
 		 */
 		if (recoveryTarget == RECOVERY_TARGET_XID)
-			snprintf(reason, sizeof(reason),
-					 "%s transaction %u",
-					 recoveryStopAfter ? "after" : "before",
-					 recoveryStopXid);
+			if (recoveryStopXid == 0)
+				snprintf(reason, sizeof(reason),
+						 "recovery reached consistency before recovery target xid of \"%u\"\n",
+						 recoveryTargetXid);
+			else
+				snprintf(reason, sizeof(reason),
+						 "%s transaction %u",
+						 recoveryStopAfter ? "after" : "before",
+						 recoveryStopXid);
 		else if (recoveryTarget == RECOVERY_TARGET_TIME)
-			snprintf(reason, sizeof(reason),
-					 "%s %s\n",
-					 recoveryStopAfter ? "after" : "before",
-					 timestamptz_to_str(recoveryStopTime));
+			if (recoveryStopTime == 0)
+				snprintf(reason, sizeof(reason),
+						 "recovery reached consistency before recovery target time of \"%s\"\n",
+						 timestamptz_to_str(recoveryTargetTime));
+			else
+				snprintf(reason, sizeof(reason),
+						 "%s %s\n",
+						 recoveryStopAfter ? "after" : "before",
+						 timestamptz_to_str(recoveryStopTime));
 		else if (recoveryTarget == RECOVERY_TARGET_NAME)
 			snprintf(reason, sizeof(reason),
 					 "at restore point \"%s\"",

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Prepared statements and generic plans

2016-06-10 Thread 'br...@momjian.us'
On Tue, Jun  7, 2016 at 06:52:15AM +0000, Albe Laurenz wrote:
> Bruce Momjian wrote:
> >> !distinct column values, a generic plan assumes a column equality
> >> !comparison will match 33% of processed rows.  Column statistics
> >>
> >> ... assumes *that* a column equality comparison will match 33% of *the* 
> >> processed rows.
> > 
> > Uh, that seems overly wordy.  I think the rule is that if the sentence
> > makes sense without the words, you should not use them, but it is
> > clearly a judgement call in this case.  Do you agree?
> 
> My gut feeling is that at least the "the" should be retained, but mine
> are the guts of a German speaker.
> It is clearly a judgement call, so follow your instincts.

I think "that/the" would make sense if this sentence was referencing a
specific result.  The sentence is referencing a hypothetical, so I don't
think "that/the" is needed.

> > One more thing --- there was talk of moving some of this into chapter
> > 66, but as someone already mentioned, there are no subsections there
> > because it is a dedicated topic:
> > 
> > 66. How the Planner Uses Statistics.
> > 
> > I am not inclined to add a prepare-only section to that chapter.  On the
> > other hand, the issues described apply to PREPARE and to protocol-level
> > prepare, so having it in PREPARE also seems illogical.  However, I am
> > inclined to leave it in PREPARE until we are ready to move all of this
> > to chapter 66.
> 
> I think it would be ok to leave it where it is in your patch; while the
> paragraph goes into technical detail, it is still alright in the general
> documentation (but only just).

I researched moving some of this text into chapter 66, but found that
only some of it related to the optimizer.  I also realized that the text
applies to the libpq/wire protocol prepare cases too, so rather than
bounce readers to the PREPARE manual page, and then to chapter 66, I
just kept it all in PREPARE, with a reference from the wire protocol
prepare section.

Also, is it possible to do an EXPLAIN prepare() with the libpq/wire
protocol?  I can't do PREPARE EXPLAIN, but I can do EXPLAIN EXECUTE. 
However, I don't see any way to inject EXPLAIN into the libpq/wire
prepare case.  Can you specify prepare(EXPLAIN SELECT)?  (PREPARE
EXPLAIN SELECT throws a syntax error.)

Looking at how the code behaves, it seems custom plans that are _more_
expensive (plus planning cost) than the generic plan switch to the
generic plan after five executions, as now documented.  Custom plans
that are significantly _cheaper_ than the generic plan _never_ use the
generic plan.

Here is an example --- first load this SQL:

DROP TABLE IF EXISTS test;
CREATE TABLE test (c1 INT, c2 INT);
INSERT INTO test SELECT c1, abs(floor(random() * 4)-1) FROM
generate_series(1, 10000) AS a(c1);
INSERT INTO test SELECT c1, abs(floor(random() * 4)-1) FROM
generate_series(10001, 15000) AS a(c1);
INSERT INTO test SELECT c1, abs(floor(random() * 4)-1) FROM
generate_series(15001, 20000) AS a(c1);
-- add non-uniformly-distributed values to 'c2'
INSERT INTO test SELECT 20001, 3;
INSERT INTO test SELECT 20002, 4;
CREATE INDEX i_test_c1 ON test (c1);
CREATE INDEX i_test_c2 ON test (c2);
ANALYZE test;
PREPARE prep_c1 AS SELECT * FROM test WHERE c1 = $1;
PREPARE prep_c2 AS SELECT * FROM test WHERE c2 = $1;

prep_c1 references 'c1', which is a unique column.  Any value used in
the EXECUTE, e.g. EXPLAIN EXECUTE prep_c1(1), existent or non-existent,
generates an index scan, and after five executions a generic index
scan is used.

For prep_c2, if you use the 50% common value '1', the first five
executions use a sequential scan, and the sixth is a generic Bitmap
Heap Scan.  For the 25% values '0' or '2', the first five runs
generate a Bitmap Heap Scan, and a generic Bitmap Heap Scan is used on
the sixth and after.

For a prep_c2 value of 3 or any non-existent value, an Index Scan is
used, and a generic plan is never chosen, because the Index Scan is
significantly cheaper than the generic plan.
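
For instance, one way to watch the switchover with the table and prepared
statements loaded above (a sketch; EXPLAIN EXECUTE goes through the same
plan-choice logic as plain EXECUTE) is:

-- executions 1-5 of prep_c2(1) show a custom Seq Scan;
-- the sixth and later use the generic Bitmap Heap Scan
EXPLAIN (COSTS OFF) EXECUTE prep_c2(1);
EXPLAIN (COSTS OFF) EXECUTE prep_c2(1);
EXPLAIN (COSTS OFF) EXECUTE prep_c2(1);
EXPLAIN (COSTS OFF) EXECUTE prep_c2(1);
EXPLAIN (COSTS OFF) EXECUTE prep_c2(1);
EXPLAIN (COSTS OFF) EXECUTE prep_c2(1);  -- generic plan from here on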

Updated patch attached.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
new file mode 100644
index 3829a14..6285dd0
*** a/doc/src/sgml/libpq.sgml
--- b/doc/src/sgml/libpq.sgml
*** PGresult *PQprepare(PGconn *conn,
*** 2303,2310 
 
  PQprepare creates a prepared statement for later
  execution with PQexecPrepared.  This feature allows
! commands that will be used repeatedly to be parsed and planned just
! once, rather than each time they are executed.
  PQprepare is supported only in protocol 3.0 and later
  connections; it will fail 

Re: [HACKERS] Perf Benchmarking and regression.

2016-06-10 Thread Andres Freund
On 2016-06-09 17:19:34 -0700, Andres Freund wrote:
> On 2016-06-09 14:37:31 -0700, Andres Freund wrote:
> > I'm writing a patch right now, planning to post it later today, commit
> > it tomorrow.
> 
> Attached.

And pushed. Thanks to Michael for noticing the missing addition of
header file hunk.

Andres


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Reviewing freeze map code

2016-06-10 Thread Alvaro Herrera
Robert Haas wrote:

> 3. vacuumlazy.c includes this code:
> 
> if (heap_prepare_freeze_tuple(tuple.t_data, FreezeLimit,
>   MultiXactCutoff, &frozen[nfrozen]))
> frozen[nfrozen++].offset = offnum;
> else if (heap_tuple_needs_eventual_freeze(tuple.t_data))
> all_frozen = false;
> 
> That's wrong, because a "true" return value from
> heap_prepare_freeze_tuple() means only that it has done *some*
> freezing work on the tuple, not that it's done all of the freezing
> work that will ever need to be done.  So, if the tuple's xmin can be
> frozen and is aborted but not older than vacuum_freeze_min_age, then
> heap_prepare_freeze_tuple() won't freeze xmax, but the page will still
> be marked all-frozen, which is bad.  I think it normally won't matter
> because the xmax will probably be hinted invalid anyway, since we just
> pruned the page which should have set hint bits everywhere, but if
> those hint bits were lost then we'd eventually end up with an
> accessible xmax pointing off into space.

Good catch.  Also consider multixact freezing: if there is a
long-running transaction which is a lock-only member of tuple's Xmax,
and the multixact needs freezing because it's older than the multixact
cutoff, we set the xmax to a new multixact which includes that old
locker.  See FreezeMultiXactId.

> My first thought was to just delete the "else" but that would be bad
> because we'd fail to set all-frozen immediately in a lot of cases
> where we should.  This needs a bit more thought than I have time to
> give it right now.

How about changing the return tuple of heap_prepare_freeze_tuple to
a bitmap?  Two flags: "Freeze [not] done" and "[No] more freezing
needed"

-- 
Álvaro Herrera  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [sqlsmith] Failed assertion in postgres_fdw/deparse.c:1116

2016-06-10 Thread Robert Haas
On Wed, Jun 8, 2016 at 8:27 AM, Robert Haas  wrote:
> On Tue, Jun 7, 2016 at 10:41 PM, Noah Misch  wrote:
>> [Action required within 72 hours.  This is a generic notification.]
>>
>> The above-described topic is currently a PostgreSQL 9.6 open item.  Robert,
>> since you committed the patch believed to have created it, you own this open
>> item.  If some other commit is more relevant or if this does not belong as a
>> 9.6 open item, please let us know.  Otherwise, please observe the policy on
>> open item ownership[1] and send a status update within 72 hours of this
>> message.  Include a date for your subsequent status update.  Testers may
>> discover new open items at any time, and I want to plan to get them all fixed
>> well in advance of shipping 9.6rc1.  Consequently, I will appreciate your
>> efforts toward speedy resolution.  Thanks.
>
> Discussion of this issue is still ongoing.  Accordingly, I intend to
> wait until that discussion has concluded before proceeding further.
> I'll check this thread again no later than Friday and send an update
> by then.

Although I have done a bit of review of this patch, it needs more
thought than I have so far had time to give it.  I will update again
by Tuesday.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] parallel.c is not marked as test covered

2016-06-10 Thread Robert Haas
On Fri, Jun 10, 2016 at 1:49 PM, Peter Eisentraut wrote:
> Regarding the patch that ended up being committed, I wonder if it is
> intentional that PL/pgSQL overwrites the context from the parallel worker.
> Shouldn't the context effectively look like
>
> ERROR:  message
> CONTEXT:  parallel worker
> CONTEXT:  PL/pgSQL function
>
> Elsewhere in this thread I suggested getting rid of the parallel worker
> context by default (except for debugging), but if we do want to keep it,
> then it seems to be a bug that a PL/pgSQL function can just eliminate it.

Several people have suggested getting rid of that now, so maybe we
should just go ahead and do it.

How would we make it available for debugging, though?  That seems like
something people will want.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Reviewing freeze map code

2016-06-10 Thread Robert Haas
On Fri, Jun 10, 2016 at 1:59 PM, Andres Freund  wrote:
>> Master, but with an existing standby. So it could be related to
>> hot_standby_feedback or such.
>
> I just managed to trigger it again.
>
>
> #1  0x7fa1a73778da in __GI_abort () at abort.c:89
> #2  0x7f9f1395e59c in record_corrupt_item (items=items@entry=0x2137be0, 
> tid=0x7f9fb8681c0c)
> at /home/andres/src/postgresql/contrib/pg_visibility/pg_visibility.c:612
> #3  0x7f9f1395ead5 in collect_corrupt_items (relid=relid@entry=29449, 
> all_visible=all_visible@entry=0 '\000', all_frozen=all_frozen@entry=1 '\001')
> at /home/andres/src/postgresql/contrib/pg_visibility/pg_visibility.c:572
> #4  0x7f9f1395f476 in pg_check_frozen (fcinfo=0x7ffe5343a200) at 
> /home/andres/src/postgresql/contrib/pg_visibility/pg_visibility.c:292
> #5  0x005fdbec in ExecMakeTableFunctionResult (funcexpr=0x2168630, 
> econtext=0x2168320, argContext=, expectedDesc=0x2168ef0,
> randomAccess=0 '\000') at 
> /home/andres/src/postgresql/src/backend/executor/execQual.c:2211
> #6  0x00616992 in FunctionNext (node=node@entry=0x2168210) at 
> /home/andres/src/postgresql/src/backend/executor/nodeFunctionscan.c:94
> #7  0x005ffdcb in ExecScanFetch (recheckMtd=0x6166f0 
> , accessMtd=0x616700 , node=0x2168210)
> at /home/andres/src/postgresql/src/backend/executor/execScan.c:95
> #8  ExecScan (node=node@entry=0x2168210, accessMtd=accessMtd@entry=0x616700 
> , recheckMtd=recheckMtd@entry=0x6166f0 )
> at /home/andres/src/postgresql/src/backend/executor/execScan.c:145
> #9  0x006169e4 in ExecFunctionScan (node=node@entry=0x2168210) at 
> /home/andres/src/postgresql/src/backend/executor/nodeFunctionscan.c:268
>
> the error happened just after I restarted a standby, so it's not
> unlikely to be related to hot_standby_feedback.

After some off-list discussion and debugging, Andres and I have
managed to identify three issues here (so far).  Two are issues in the
testing, and one is a data-corrupting bug in the freeze map code.

1. pg_check_visible keeps on using the same OldestXmin for all its
checks even though the real OldestXmin may advance in the meantime.
This can lead to spurious problem reports: pg_check_visible() thinks
that the tuple isn't all visible yet and reports it as corruption, but
in reality there's no problem.

2. pg_check_visible includes the same check for heap-xmin-committed
that vacuumlazy.c uses, but hint bits aren't crash safe, so this could
lead to a spurious trouble report in a scenario involving a crash.

3. vacuumlazy.c includes this code:

if (heap_prepare_freeze_tuple(tuple.t_data, FreezeLimit,
  MultiXactCutoff, &frozen[nfrozen]))
frozen[nfrozen++].offset = offnum;
else if (heap_tuple_needs_eventual_freeze(tuple.t_data))
all_frozen = false;

That's wrong, because a "true" return value from
heap_prepare_freeze_tuple() means only that it has done *some*
freezing work on the tuple, not that it's done all of the freezing
work that will ever need to be done.  So, if the tuple's xmin can be
frozen and is aborted but not older than vacuum_freeze_min_age, then
heap_prepare_freeze_tuple() won't freeze xmax, but the page will still
be marked all-frozen, which is bad.  I think it normally won't matter
because the xmax will probably be hinted invalid anyway, since we just
pruned the page which should have set hint bits everywhere, but if
those hint bits were lost then we'd eventually end up with an
accessible xmax pointing off into space.

My first thought was to just delete the "else" but that would be bad
because we'd fail to set all-frozen immediately in a lot of cases
where we should.  This needs a bit more thought than I have time to
give it right now.
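
For anyone reproducing this, the checks above are the pg_visibility
functions discussed in this thread; they can be run against a live table
roughly like this (a sketch, assuming the contrib module as committed):

-- each function returns the TIDs it considers inconsistent with the
-- visibility map; an empty result means no corruption was detected
CREATE EXTENSION IF NOT EXISTS pg_visibility;
SELECT * FROM pg_check_visible('pgbench_accounts'::regclass);
SELECT * FROM pg_check_frozen('pgbench_accounts'::regclass);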

(I will update on the status of this open item again no later than
Monday; probably sooner.)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: PL/Pythonu - function ereport

2016-06-10 Thread Pavel Stehule
2016-06-10 20:56 GMT+02:00 Peter Eisentraut <peter.eisentr...@2ndquadrant.com>:

> I noticed that this new feature in PL/Python gratuitously uses slightly
> different keyword names than the C and PL/pgSQL APIs, e.g. "schema" instead
> of "schema_name" etc.  I propose to fix that with the attached patch.
>
>
good idea

+1

Pavel


> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: [HACKERS] proposal: PL/Pythonu - function ereport

2016-06-10 Thread Peter Eisentraut
I noticed that this new feature in PL/Python gratuitously uses slightly 
different keyword names than the C and PL/pgSQL APIs, e.g. "schema" 
instead of "schema_name" etc.  I propose to fix that with the attached 
patch.


--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From 1622e74f695fbcee4d9ade1c8736001d3adc6b64 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Fri, 10 Jun 2016 14:38:20 -0400
Subject: [PATCH] PL/Python: Rename new keyword arguments of plpy.error() etc.

Rename schema -> schema_name etc. to remain consistent with the C API and
PL/pgSQL.
---
 doc/src/sgml/plpython.sgml|  6 +--
 src/pl/plpython/expected/plpython_ereport.out | 66 -
 src/pl/plpython/plpy_plpymodule.c | 70 +--
 src/pl/plpython/sql/plpython_ereport.sql  | 62 
 4 files changed, 102 insertions(+), 102 deletions(-)

diff --git a/doc/src/sgml/plpython.sgml b/doc/src/sgml/plpython.sgml
index cff66a2..abc11a9 100644
--- a/doc/src/sgml/plpython.sgml
+++ b/doc/src/sgml/plpython.sgml
@@ -1374,9 +1374,9 @@ Utility Functions
The following keyword-only arguments are accepted:

    detail, hint,
-   sqlstate, schema,
-   table, column,
-   datatype, constraint
+   sqlstate, schema_name,
+   table_name, column_name,
+   datatype_name, constraint_name
    .
The string representation of the objects passed as keyword-only arguments
is used to enrich the messages reported to the client. For example:
diff --git a/src/pl/plpython/expected/plpython_ereport.out 
b/src/pl/plpython/expected/plpython_ereport.out
index 8a6dfe4..e32b672 100644
--- a/src/pl/plpython/expected/plpython_ereport.out
+++ b/src/pl/plpython/expected/plpython_ereport.out
@@ -9,11 +9,11 @@ plpy.info('This is message text.',
 detail = 'This is detail text',
 hint = 'This is hint text.',
 sqlstate = 'XX000',
-schema = 'any info about schema',
-table = 'any info about table',
-column = 'any info about column',
-datatype = 'any info about datatype',
-constraint = 'any info about constraint')
+schema_name = 'any info about schema',
+table_name = 'any info about table',
+column_name = 'any info about column',
+datatype_name = 'any info about datatype',
+constraint_name = 'any info about constraint')
 plpy.notice('notice', detail = 'some detail')
 plpy.warning('warning', detail = 'some detail')
 plpy.error('stop on error', detail = 'some detail', hint = 'some hint')
@@ -70,12 +70,12 @@ CONTEXT:  PL/Python anonymous code block
 -- raise exception in python, handle exception in plgsql
 CREATE OR REPLACE FUNCTION raise_exception(_message text, _detail text DEFAULT NULL, _hint text DEFAULT NULL,
                                            _sqlstate text DEFAULT NULL,
-                                           _schema text DEFAULT NULL, _table text DEFAULT NULL, _column text DEFAULT NULL,
-                                           _datatype text DEFAULT NULL, _constraint text DEFAULT NULL)
+                                           _schema_name text DEFAULT NULL, _table_name text DEFAULT NULL, _column_name text DEFAULT NULL,
+                                           _datatype_name text DEFAULT NULL, _constraint_name text DEFAULT NULL)
 RETURNS void AS $$
 kwargs = { "message":_message, "detail":_detail, "hint":_hint,
-           "sqlstate":_sqlstate, "schema":_schema, "table":_table,
-           "column":_column, "datatype":_datatype, "constraint":_constraint }
+           "sqlstate":_sqlstate, "schema_name":_schema_name, "table_name":_table_name,
+           "column_name":_column_name, "datatype_name":_datatype_name, "constraint_name":_constraint_name }
 # ignore None values - should work on Python2.3
 dict = {}
 for k in kwargs:
@@ -101,11 +101,11 @@ SELECT raise_exception(_message => 'message text',
_detail => 'detail text',
_hint => 'hint text',
_sqlstate => 'XX555',
-   _schema => 'schema text',
-   _table => 'table text',
-   _column => 'column text',
-   _datatype => 'datatype text',
-   _constraint => 'constraint text');
+   _schema_name => 'schema text',
+   _table_name => 

Re: [HACKERS] parallel workers and client encoding

2016-06-10 Thread Peter Eisentraut

On 6/6/16 9:45 PM, Peter Eisentraut wrote:

Attached is a patch to illustrates how this could be fixed.  There might
be similar issues elsewhere.  The notification propagation in particular
could be affected.


Tracing the code, NotificationResponse messages are converted to the 
client encoding during transmission from the worker to the leader and 
then passed on verbatim to the client, so this should appear to work at
the moment.  But it will break if we make a change like I suggested, 
namely changing the client encoding in the worker process to be the 
server encoding, because then nothing will transcode it before it 
reaches the client anymore.  So this will need a well-considered 
solution in concert with the error/notice issue.


Then again, it's not clear to me under what circumstances a parallel 
worker could or should be sending a NotificationResponse.


--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Reviewing freeze map code

2016-06-10 Thread Andres Freund
On 2016-06-09 23:39:24 -0700, Andres Freund wrote:
> On 2016-06-10 11:58:26 +0530, Amit Kapila wrote:
> > I have tried in multiple ways by running pgbench with read-write tests, but
> > could not see any such behaviour.
> 
> It took over an hour of pgbench on a fast laptop till I saw it.
> 
> 
> > I have tried by even crashing and
> > restarting the server and then again running pgbench.  Do you see these
> > records on master or slave?
> 
> Master, but with an existing standby. So it could be related to
> hot_standby_feedback or such.

I just managed to trigger it again.


#1  0x7fa1a73778da in __GI_abort () at abort.c:89
#2  0x7f9f1395e59c in record_corrupt_item (items=items@entry=0x2137be0, 
tid=0x7f9fb8681c0c)
at /home/andres/src/postgresql/contrib/pg_visibility/pg_visibility.c:612
#3  0x7f9f1395ead5 in collect_corrupt_items (relid=relid@entry=29449, 
all_visible=all_visible@entry=0 '\000', all_frozen=all_frozen@entry=1 '\001')
at /home/andres/src/postgresql/contrib/pg_visibility/pg_visibility.c:572
#4  0x7f9f1395f476 in pg_check_frozen (fcinfo=0x7ffe5343a200) at 
/home/andres/src/postgresql/contrib/pg_visibility/pg_visibility.c:292
#5  0x005fdbec in ExecMakeTableFunctionResult (funcexpr=0x2168630, 
econtext=0x2168320, argContext=, expectedDesc=0x2168ef0, 
randomAccess=0 '\000') at 
/home/andres/src/postgresql/src/backend/executor/execQual.c:2211
#6  0x00616992 in FunctionNext (node=node@entry=0x2168210) at 
/home/andres/src/postgresql/src/backend/executor/nodeFunctionscan.c:94
#7  0x005ffdcb in ExecScanFetch (recheckMtd=0x6166f0 , 
accessMtd=0x616700 , node=0x2168210)
at /home/andres/src/postgresql/src/backend/executor/execScan.c:95
#8  ExecScan (node=node@entry=0x2168210, accessMtd=accessMtd@entry=0x616700 
, recheckMtd=recheckMtd@entry=0x6166f0 )
at /home/andres/src/postgresql/src/backend/executor/execScan.c:145
#9  0x006169e4 in ExecFunctionScan (node=node@entry=0x2168210) at 
/home/andres/src/postgresql/src/backend/executor/nodeFunctionscan.c:268

the error happened just after I restarted a standby, so it's not
unlikely to be related to hot_standby_feedback.


(gdb) p *tuple.t_data
$5 = {t_choice = {t_heap = {t_xmin = 9105470, t_xmax = 26049273, t_field3 = 
{t_cid = 0, t_xvac = 0}}, t_datum = {datum_len_ = 9105470, 
  datum_typmod = 26049273, datum_typeid = 0}}, t_ctid = {ip_blkid = {bi_hi 
= 1, bi_lo = 19765}, ip_posid = 3}, t_infomask2 = 4, t_infomask = 770, 
  t_hoff = 24 '\030', t_bits = 0x7f9fb8681c17 ""}

Infomask is:
#define HEAP_XMIN_COMMITTED 0x0100  /* t_xmin committed */
#define HEAP_XMIN_INVALID   0x0200  /* t_xmin invalid/aborted */
#define HEAP_XMIN_FROZEN(HEAP_XMIN_COMMITTED|HEAP_XMIN_INVALID)
#define HEAP_HASVARWIDTH0x0002  /* has variable-width 
attribute(s) */

This indeed looks borked.  Such a tuple should never survive
if (check_frozen && !VM_ALL_FROZEN(rel, blkno, &vmbuffer))
check_frozen = false;
especially not when
(gdb) p PageIsAllVisible(page)
$3 = 4

(fwiw, checking PD_ALL_VISIBLE in those functions sounds like a good plan)


I've got another earlier case (that I somehow missed seeing), below
check_visible:

(gdb) p *tuple->t_data 
$2 = {t_choice = {t_heap = {t_xmin = 13616549, t_xmax = 25210801, t_field3 = 
{t_cid = 0, t_xvac = 0}}, t_datum = {datum_len_ = 13616549, 
  datum_typmod = 25210801, datum_typeid = 0}}, t_ctid = {ip_blkid = {bi_hi 
= 0, bi_lo = 52320}, ip_posid = 67}, t_infomask2 = 32772, t_infomask = 8962, 
  t_hoff = 24 '\030', t_bits = 0x7f9fda2f8717 ""}

infomask is:
#define HEAP_UPDATED0x2000  /* this is UPDATEd version of 
row */
#define HEAP_XMIN_COMMITTED 0x0100  /* t_xmin committed */
#define HEAP_XMIN_INVALID   0x0200  /* t_xmin invalid/aborted */
#define HEAP_HASVARWIDTH0x0002  /* has variable-width 
attribute(s) */
infomask2 is:
#define HEAP_ONLY_TUPLE 0x8000  /* this is heap-only tuple */

I'll run again, with a debugger attached, maybe I can get some more
information.


Regards,

Andres


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] parallel.c is not marked as test covered

2016-06-10 Thread Peter Eisentraut

On 6/7/16 11:43 PM, Noah Misch wrote:

I changed this to keep the main message while overwriting the CONTEXT; a bug
in this area could very well produce some other error rather than no error.


Regarding the patch that ended up being committed, I wonder if it is 
intentional that PL/pgSQL overwrites the context from the parallel 
worker.  Shouldn't the context effectively look like


ERROR:  message
CONTEXT:  parallel worker
CONTEXT:  PL/pgSQL function

Elsewhere in this thread I suggested getting rid of the parallel worker 
context by default (except for debugging), but if we do want to keep it, 
then it seems to be a bug that a PL/pgSQL function can just eliminate it.


--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Don't generate parallel paths for rels with parallel-restricted

2016-06-10 Thread Robert Haas
On Thu, Jun 9, 2016 at 8:17 PM, Tom Lane  wrote:
> I wrote:
>> Robert Haas  writes:
>>> There's one other related thing I'm concerned about, which is that the
>>> code in namespace.c that manages pg_temp doesn't know anything about
>>> parallelism. So it will interpret pg_temp to mean the pg_temp_NNN
>>> schema for its own backend ID rather than the leader's backend ID.
>>> I'm not sure that's a problem, but I haven't thought deeply about it.
>
>> Hmmm.  I'll take a look.
>
> Yeah, that was pretty broken, but I believe I've fixed it.

Thanks.  Looks reasonable on a quick once-over.

For the record, I think much of what constitutes "broken" is arbitrary
here - anything that doesn't work can be labelled "well, that's not
supported under parallel query, label your function
parallel-restricted or parallel-unsafe".  And I think we're going to
have to take exactly that approach in many cases.  I have no illusions
that the current infrastructure covers everything that users will want
to do, and I think we'll be uncovering and removing limitations of
this sort for a long time.  But clearly the more state we synchronize,
the happier users will be.  I probably should have done something like
what you did here sooner, but I didn't think it could be done that
simply.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Don't generate parallel paths for rels with parallel-restricted

2016-06-10 Thread Robert Haas
On Fri, Jun 10, 2016 at 10:50 AM, Tom Lane  wrote:
> Amit Kapila  writes:
>> On Thu, Jun 9, 2016 at 11:38 PM, Tom Lane  wrote:
>>> Robert Haas  writes:
>>>> Could you answer my question about whether adjust_appendrel_attrs()
>>>> might translate Vars into non-Vars?
>
>>> Yes, absolutely.
>
>> Isn't this true only for UNION ALL cases and not for inheritance child
>> relations (at least that is what seems to be mentioned in comments
>> for translated_vars in AppendRelInfo)?
>
> Correct.
>
>> If that is right, then I think
>> there shouldn't be a problem w.r.t parallel plans even if
>> adjust_appendrel_attrs() translate Vars into non-Vars, because for UNION
>> ALL it seems parent rels rtekind is RTE_SUBQUERY and we consider such rels
>> as parallel unsafe (see set_rel_consider_parallel()).
>
> Hm, that would explain why you haven't been able to exhibit a bug on HEAD
> as it stands; but it doesn't make the code any less broken on its own
> terms, or any less of a risk for future bugs when we try to relax the
> no-subqueries restriction.
>
> I am still of the opinion that reltarget_has_non_vars is simply a bad
> idea.  It's wrong as-committed, it will be a permanent maintenance hazard,
> and there is no evidence that it saves anything meaningful even as the
> code stood yesterday, let alone after cae1c788b.  I think we should just
> remove it and allow those has_parallel_hazard calls to occur
> unconditionally, and will go do that unless I hear some really good
> argument to the contrary.

OK.  We can always revisit it another time if need be.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Online DW

2016-06-10 Thread David G. Johnston
On Fri, Jun 10, 2016 at 4:11 AM, Sridhar N Bamandlapally <sridhar@gmail.com> wrote:

> Hi
>
> Is there any feature in PostgreSQL where online DW (data warehousing) is
> possible?
>
> I am looking for a scenario like
>
> 1. Production DB will have CURRENT + LAST 7 DAYS data only
>
> 2. Archive/DW DB will have CURRENT + COMPLETE HISTORY
>
> expecting something like streaming, but not ETL
>
>
The entire DB couldn't operate this way, since not every record has a
concept of time, and if you use any kind of physical time you are going to
have issues as well.

First impression is you want to horizontally partition your "time-impacted"
tables so that each partition contains only data having the same ISO Week
number in the same ISO Year.

Remove older tables from the inheritance and stick them on a separate
tablespace and/or stream them to another database.
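
As an illustration, a minimal sketch of that inheritance-based layout
(table, column, and tablespace names are hypothetical; in 9.6-era
PostgreSQL a partition is a plain child table with a CHECK constraint):

-- parent plus one ISO-week child (2016 week 23 runs Jun 6 to Jun 12)
CREATE TABLE events (event_time timestamptz NOT NULL, payload text);
CREATE TABLE events_2016w23 (
    CHECK (event_time >= '2016-06-06' AND event_time < '2016-06-13')
) INHERITS (events);

-- once the week ages out, detach it and move it aside
ALTER TABLE events_2016w23 NO INHERIT events;
ALTER TABLE events_2016w23 SET TABLESPACE archive_space;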

As has been mentioned, there are various tools out there today that can
likely be used to fulfill whatever fundamental need you have.  "Not ETL" is
not a need, though; it's at best a "nice-to-have", unless you are willing to
forgo any solution to your larger problem just because the implementation
is not optimal.

Unless you define your true goals and constraints, it's going to be hard to
make recommendations.

David J.


Re: [HACKERS] Re: [COMMITTERS] pgsql: Avoid extra locks in GetSnapshotData if old_snapshot_threshold

2016-06-10 Thread Kevin Grittner
On Fri, Jun 10, 2016 at 10:26 AM, Robert Haas  wrote:
> On Fri, Jun 10, 2016 at 10:45 AM, Kevin Grittner  wrote:

>> Since vacuum calls the pruning function, and not the other way
>> around, the name you suggest would be technically more correct.
>> Committed using "Pruning" instead of "Vacuum" in both new macros.

> You've still got an early_vacuum_enabled variable name floating around there.

Gah!  Renamed for consistency.

Thanks!

-- 
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hard to maintain duplication in contain_volatile_functions_not_nextval_walker

2016-06-10 Thread Robert Haas
On Fri, Jun 10, 2016 at 11:12 AM, Tom Lane  wrote:
> Andres Freund  writes:
>> contain_volatile_functions_walker is duplicated, near entirely, in
>> contain_volatile_functions_not_nextval_walker.
>> Wouldn't it have been better not to duplicate, and keep a flag about
>> ignoring nextval in the context variable?
>> While at it, couldn't we also fold contain_mutable_functions_walker()
>> together using a similar technique?
>
> I had what might be a better idea about refactoring in this area.
> Most of the bulk of contain_volatile_functions and friends consists
> of knowing how to locate the function OIDs, if any, embedded in a given
> expression node.  I am envisioning a helper function that looks like
>
> typedef bool (*check_func) (Oid func_oid, void *context);
>
> bool
> check_functions_in_node(Node *node, check_func checker, void *context)
> {
> if (IsA(node, FuncExpr))
> {
> FuncExpr   *expr = (FuncExpr *) node;
>
> if (checker(expr->funcid, context))
> return true;
> }
> else if (IsA(node, OpExpr))
> {
> OpExpr   *expr = (OpExpr *) node;
>
> set_opfuncid(expr);
> if (checker(expr->opfuncid, context))
> return true;
> }
> ...
> return false;
> }
>
> and then for example contain_volatile_functions_walker would reduce to
>
> static bool
> contain_volatile_functions_checker(Oid func_oid, void *context)
> {
> return (func_volatile(func_oid) == PROVOLATILE_VOLATILE);
> }
>
> static bool
> contain_volatile_functions_walker(Node *node, void *context)
> {
> if (node == NULL)
> return false;
> if (check_functions_in_node(node, contain_volatile_functions_checker,
> context))
> return true;
> if (IsA(node, Query))
> {
> /* Recurse into subselects */
> return query_tree_walker((Query *) node,
>  contain_volatile_functions_walker,
>  context, 0);
> }
> return expression_tree_walker(node, contain_volatile_functions_walker,
>   context);
> }
>
> Note that the helper function doesn't recurse into child nodes, it only
> examines the given node.  This is because some of the potential callers
> have additional checks that they need to apply, so it's better if the
> call site retains control of the recursion.
>
> By my count there are half a dozen places in clauses.c that could make use
> of this, at a savings of about 80 lines each, as well as substantial
> removal of risks-of-omission.  There might be use-cases elsewhere, so
> I'm rather inclined to put the checker function in nodeFuncs.c.
>
> Thoughts?

I like it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: [COMMITTERS] pgsql: Avoid extra locks in GetSnapshotData if old_snapshot_threshold

2016-06-10 Thread Robert Haas
On Fri, Jun 10, 2016 at 10:45 AM, Kevin Grittner  wrote:
> On Thu, Jun 9, 2016 at 6:16 PM, Robert Haas  wrote:
>
>> So I like the idea of centralizing checks in
>> RelationAllowsEarlyVacuum, but shouldn't it really be called
>> RelationAllowsEarlyPruning?
>
> Since vacuum calls the pruning function, and not the other way
> around, the name you suggest would be technically more correct.
> Committed using "Pruning" instead of "Vacuum" in both new macros.
>
> I have closed the CREATE INDEX versus "snapshot too old" issue in
> the "PostgreSQL 9.6 Open Items" Wiki page.

You've still got an early_vacuum_enabled variable name floating around there.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hard to maintain duplication in contain_volatile_functions_not_nextval_walker

2016-06-10 Thread Tom Lane
Andres Freund  writes:
> contain_volatile_functions_walker is duplicated, near entirely, in
> contain_volatile_functions_not_nextval_walker.
> Wouldn't it have been better not to duplicate, and keep a flag about
> ignoring nextval in the context variable?
> While at it, couldn't we also fold contain_mutable_functions_walker()
> together using a similar technique?

I had what might be a better idea about refactoring in this area.
Most of the bulk of contain_volatile_functions and friends consists
of knowing how to locate the function OIDs, if any, embedded in a given
expression node.  I am envisioning a helper function that looks like

typedef bool (*check_func) (Oid func_oid, void *context);

bool
check_functions_in_node(Node *node, check_func checker, void *context)
{
if (IsA(node, FuncExpr))
{
FuncExpr   *expr = (FuncExpr *) node;

if (checker(expr->funcid, context))
return true;
}
else if (IsA(node, OpExpr))
{
OpExpr   *expr = (OpExpr *) node;

set_opfuncid(expr);
if (checker(expr->opfuncid, context))
return true;
}
...
return false;
}

and then for example contain_volatile_functions_walker would reduce to

static bool
contain_volatile_functions_checker(Oid func_oid, void *context)
{
return (func_volatile(func_oid) == PROVOLATILE_VOLATILE);
}

static bool
contain_volatile_functions_walker(Node *node, void *context)
{
if (node == NULL)
return false;
if (check_functions_in_node(node, contain_volatile_functions_checker,
context))
return true;
if (IsA(node, Query))
{
/* Recurse into subselects */
return query_tree_walker((Query *) node,
 contain_volatile_functions_walker,
 context, 0);
}
return expression_tree_walker(node, contain_volatile_functions_walker,
  context);
}

Note that the helper function doesn't recurse into child nodes, it only
examines the given node.  This is because some of the potential callers
have additional checks that they need to apply, so it's better if the
call site retains control of the recursion.

By my count there are half a dozen places in clauses.c that could make use
of this, at a savings of about 80 lines each, as well as substantial
removal of risks-of-omission.  There might be use-cases elsewhere, so
I'm rather inclined to put the checker function in nodeFuncs.c.

Thoughts?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Don't generate parallel paths for rels with parallel-restricted

2016-06-10 Thread Tom Lane
Amit Kapila  writes:
> On Thu, Jun 9, 2016 at 11:38 PM, Tom Lane  wrote:
>> Robert Haas  writes:
>>> Could you answer my question about whether adjust_appendrel_attrs()
>>> might translate Vars into non-Vars?

>> Yes, absolutely.

> Isn't this true only for UNION ALL cases and not for inheritance child
> relations (at least that is what seems to be mentioned in comments
> for translated_vars in AppendRelInfo)?

Correct.

> If that is right, then I think
> there shouldn't be a problem w.r.t parallel plans even if
> adjust_appendrel_attrs() translate Vars into non-Vars, because for UNION
> ALL it seems parent rels rtekind is RTE_SUBQUERY and we consider such rels
> as parallel unsafe (see set_rel_consider_parallel()).

Hm, that would explain why you haven't been able to exhibit a bug on HEAD
as it stands; but it doesn't make the code any less broken on its own
terms, or any less of a risk for future bugs when we try to relax the
no-subqueries restriction.

I am still of the opinion that reltarget_has_non_vars is simply a bad
idea.  It's wrong as-committed, it will be a permanent maintenance hazard,
and there is no evidence that it saves anything meaningful even as the
code stood yesterday, let alone after cae1c788b.  I think we should just
remove it and allow those has_parallel_hazard calls to occur
unconditionally, and will go do that unless I hear some really good
argument to the contrary.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: [COMMITTERS] pgsql: Avoid extra locks in GetSnapshotData if old_snapshot_threshold

2016-06-10 Thread Kevin Grittner
On Thu, Jun 9, 2016 at 6:16 PM, Robert Haas  wrote:

> So I like the idea of centralizing checks in
> RelationAllowsEarlyVacuum, but shouldn't it really be called
> RelationAllowsEarlyPruning?

Since vacuum calls the pruning function, and not the other way
around, the name you suggest would be technically more correct.
Committed using "Pruning" instead of "Vacuum" in both new macros.

I have closed the CREATE INDEX versus "snapshot too old" issue in
the "PostgreSQL 9.6 Open Items" Wiki page.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Reviewing freeze map code

2016-06-10 Thread Amit Kapila
On Fri, Jun 10, 2016 at 12:09 PM, Andres Freund  wrote:
>
> On 2016-06-10 11:58:26 +0530, Amit Kapila wrote:
>
>
> > While looking at code in this area, I observed that during replay of
> > records (heap_xlog_delete), we first clear the vm, then update the page.
> > So we don't have Buffer lock while updating the vm where as in the patch
> > (collect_corrupt_items()), we are relying on the fact that for clearing
> > vm bit one needs to acquire buffer lock.  Can that cause a problem?
>
> Unsetting a vm bit is always safe, right?
>

I think so, which means this should not be a problem area.


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] [COMMITTERS] pgsql: Don't generate parallel paths for rels with parallel-restricted

2016-06-10 Thread Amit Kapila
On Thu, Jun 9, 2016 at 11:38 PM, Tom Lane  wrote:
>
> Robert Haas  writes:
>
> > Could you answer my question about whether adjust_appendrel_attrs()
> > might translate Vars into non-Vars?
>
> Yes, absolutely.

Isn't this true only for UNION ALL cases and not for inheritance child
relations (at least that is what seems to be mentioned in comments
for translated_vars in AppendRelInfo)?   If that is right, then I think
there shouldn't be a problem w.r.t parallel plans even if
adjust_appendrel_attrs() translate Vars into non-Vars, because for UNION
ALL it seems parent rels rtekind is RTE_SUBQUERY and we consider such rels
as parallel unsafe (see set_rel_consider_parallel()).  So it doesn't matter
even if child rels target list contains any parallel unsafe expression, as
we are not going to create parallel paths for such relations.


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Parallel safety tagging of extension functions

2016-06-10 Thread Andreas Karlsson

On 06/10/2016 01:44 PM, Andreas Karlsson wrote:

I have attached a patch which adds the schema, plus an updated patch for
tsearch2.


Forgot to add the schema to the tables. Here are new versions.

Andreas


parallel-contrib-v4-tsearch2.patch.gz
Description: application/gzip
diff --git a/contrib/citext/citext--1.1--1.2.sql b/contrib/citext/citext--1.1--1.2.sql
index 60dd15b..a6a30e9 100644
--- a/contrib/citext/citext--1.1--1.2.sql
+++ b/contrib/citext/citext--1.1--1.2.sql
@@ -41,14 +41,14 @@ ALTER FUNCTION replace(citext, citext, citext) PARALLEL SAFE;
 ALTER FUNCTION split_part(citext, citext, int) PARALLEL SAFE;
 ALTER FUNCTION translate(citext, citext, text) PARALLEL SAFE;
 
-UPDATE pg_proc SET proparallel = 's'
-WHERE oid = 'min(citext)'::regprocedure;
+UPDATE pg_catalog.pg_proc SET proparallel = 's'
+WHERE oid = 'min(citext)'::pg_catalog.regprocedure;
 
-UPDATE pg_proc SET proparallel = 's'
-WHERE oid = 'max(citext)'::regprocedure;
+UPDATE pg_catalog.pg_proc SET proparallel = 's'
+WHERE oid = 'max(citext)'::pg_catalog.regprocedure;
 
-UPDATE pg_aggregate SET aggcombinefn = 'citext_smaller'
-WHERE aggfnoid = 'max(citext)'::regprocedure;
+UPDATE pg_catalog.pg_aggregate SET aggcombinefn = 'citext_smaller'
+WHERE aggfnoid = 'max(citext)'::pg_catalog.regprocedure;
 
-UPDATE pg_aggregate SET aggcombinefn = 'citext_larger'
-WHERE aggfnoid = 'max(citext)'::regprocedure;
+UPDATE pg_catalog.pg_aggregate SET aggcombinefn = 'citext_larger'
+WHERE aggfnoid = 'max(citext)'::pg_catalog.regprocedure;
diff --git a/contrib/intagg/intagg--1.0--1.1.sql b/contrib/intagg/intagg--1.0--1.1.sql
index 2ec95b6..d9a8705 100644
--- a/contrib/intagg/intagg--1.0--1.1.sql
+++ b/contrib/intagg/intagg--1.0--1.1.sql
@@ -7,5 +7,5 @@ ALTER FUNCTION int_agg_state(internal, int4) PARALLEL SAFE;
 ALTER FUNCTION int_agg_final_array(internal) PARALLEL SAFE;
 ALTER FUNCTION int_array_enum(int4[]) PARALLEL SAFE;
 
-UPDATE pg_proc SET proparallel = 's'
-WHERE oid = 'int_array_aggregate(int4)'::regprocedure;
+UPDATE pg_catalog.pg_proc SET proparallel = 's'
+WHERE oid = 'int_array_aggregate(int4)'::pg_catalog.regprocedure;

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Parallel safety tagging of extension functions

2016-06-10 Thread Andreas Karlsson

On 06/09/2016 10:48 PM, Tom Lane wrote:

Robert Haas  writes:

On Sat, May 21, 2016 at 11:45 AM, Tom Lane  wrote:

Yes, let's fix it.  This will also take care of the questions about
whether the GIN/GIST opclass tweaks I made a few months ago require
module version bumps.



Tom, there's a patch for this at
https://www.postgresql.org/message-id/574f091a.3050...@proxel.se which
I think you should review, since you were the one who made the tweaks
involved.  Any chance you can do that RSN?


I've pushed this with some revisions to make the queries more
search-path-safe.  I'm not too happy with the safety of the queries
I see already present from the previous patches.  I think stuff
like this:

UPDATE pg_proc SET proparallel = 's'
WHERE oid = 'min(citext)'::regprocedure;

needs to be more like

UPDATE pg_catalog.pg_proc SET proparallel = 's'
WHERE oid = 'min(citext)'::pg_catalog.regprocedure;


Good point. While I believe that we can trust that ALTER EXTENSION
handles the search path for the functions of the extension, we should
qualify things in pg_catalog.


I have attached a patch which adds the schema, plus an updated patch for
tsearch2.


Andreas


parallel-contrib-v3-tsearch2.patch.gz
Description: application/gzip
diff --git a/contrib/citext/citext--1.1--1.2.sql b/contrib/citext/citext--1.1--1.2.sql
index 60dd15b..4f0e4bc 100644
--- a/contrib/citext/citext--1.1--1.2.sql
+++ b/contrib/citext/citext--1.1--1.2.sql
@@ -42,13 +42,13 @@ ALTER FUNCTION split_part(citext, citext, int) PARALLEL SAFE;
 ALTER FUNCTION translate(citext, citext, text) PARALLEL SAFE;
 
 UPDATE pg_proc SET proparallel = 's'
-WHERE oid = 'min(citext)'::regprocedure;
+WHERE oid = 'min(citext)'::pg_catalog.regprocedure;
 
 UPDATE pg_proc SET proparallel = 's'
-WHERE oid = 'max(citext)'::regprocedure;
+WHERE oid = 'max(citext)'::pg_catalog.regprocedure;
 
 UPDATE pg_aggregate SET aggcombinefn = 'citext_smaller'
-WHERE aggfnoid = 'max(citext)'::regprocedure;
+WHERE aggfnoid = 'max(citext)'::pg_catalog.regprocedure;
 
 UPDATE pg_aggregate SET aggcombinefn = 'citext_larger'
-WHERE aggfnoid = 'max(citext)'::regprocedure;
+WHERE aggfnoid = 'max(citext)'::pg_catalog.regprocedure;
diff --git a/contrib/intagg/intagg--1.0--1.1.sql b/contrib/intagg/intagg--1.0--1.1.sql
index 2ec95b6..b2a2820 100644
--- a/contrib/intagg/intagg--1.0--1.1.sql
+++ b/contrib/intagg/intagg--1.0--1.1.sql
@@ -8,4 +8,4 @@ ALTER FUNCTION int_agg_final_array(internal) PARALLEL SAFE;
 ALTER FUNCTION int_array_enum(int4[]) PARALLEL SAFE;
 
 UPDATE pg_proc SET proparallel = 's'
-WHERE oid = 'int_array_aggregate(int4)'::regprocedure;
+WHERE oid = 'int_array_aggregate(int4)'::pg_catalog.regprocedure;

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Online DW

2016-06-10 Thread Sridhar N Bamandlapally
One thing looks possible (the feature is not available today), just an idea.

example/syntax:

BEGIN NOARCHIVE;

  --- transaction-1
  --- transaction-2
  .
  .
  --- transaction-N

END;

These transactions would be performed on production to clean up old data,
and would not be synced to the Archive/DW DB.

One heads-up: the Archive/DW DB may need to be built WITHOUT CONSTRAINTS.

We may also need to introduce an ARCHIVE system/tag in pg_hba.conf.

Thanks
Sridhar
OpenText

On Fri, Jun 10, 2016 at 2:22 PM, Craig Ringer  wrote:

> On 10 June 2016 at 16:11, Sridhar N Bamandlapally wrote:
>
>> Hi
>>
>> Is there any feature in PostgreSQL where online DW (data warehousing) is
>> possible?
>>
>> I am looking for a scenario like
>>
>> 1. Production DB will have CURRENT + LAST 7 DAYS data only
>>
>> 2. Archive/DW DB will have CURRENT + COMPLETE HISTORY
>>
>> expecting something like streaming, but not ETL
>>
>
> There's nothing built-in, but that's exactly the sort of thing pglogical
> is intended for. You can also build something along those lines with
> Londiste fairly easily.
>
> Hopefully this is the sort of thing we can move toward with built-in
> logical replication in coming releases.
>
> --
>  Craig Ringer   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>


Re: [HACKERS] [sqlsmith] Failed assertion in postgres_fdw/deparse.c:1116

2016-06-10 Thread Amit Langote
On 2016/06/10 2:07, Robert Haas wrote:
> On Thu, Jun 9, 2016 at 5:50 AM, Amit Langote wrote:
>> I adjusted some comments per off-list suggestion from Ashutosh.  Please
>> find attached the new version.
> 
> Are PlaceHolderVars the only problem we need to worry about here?

It seems so, as far as postgres_fdw join push-down logic is concerned.

> What about other expressions that creep into the target list during
> subquery pull-up which are not safe to push down?  See comments in
> set_append_rel_size(), recent discussion on the thread "Failed
> assertion in parallel worker (ExecInitSubPlan)", and commit
> b12fd41c695b43c76b0a9a4d19ba43b05536440c.

I went through the discussion on that thread.  I see at least some
difference between how those considerations affect parallel-safety and
postgres_fdw join push-down safety.  While parallelism is considered for
append child rels requiring guards discussed on that thread, the same does
not seem to hold for the join push-down case.  The latter would fail the
safety check much earlier on the grounds of one of the component rels not
being pushdown_safe.  That's because the failing component rel would be
append parent rel (not in its role as append child) so would not have any
foreign path to begin with.  Any hazards introduced by subquery pull-up
then become irrelevant.  That's the case when we have an inheritance tree
of foreign tables (headed by a foreign table).  The other case (union all
with subquery pull-up) does not even get that far.

So the only case this fix should account for is where we have a single
foreign table entering a potential foreign join after being pulled up from
a subquery with unsafe PHVs being created that are referenced above the
join.  If a given push-down attempt reaches as far as the check introduced
by the proposed patch, we can be sure that there are no other unsafe
expressions to worry about (accounted for by is_foreign_expr() checks up
to that point).

Am I missing something?

Thanks,
Amit




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Online DW

2016-06-10 Thread Craig Ringer
On 10 June 2016 at 16:11, Sridhar N Bamandlapally wrote:

> Hi
>
> Is there any feature in PostgreSQL where online DW (Data warehousing) is
> possible?
>
> I am looking for a scenario like
>
> 1. Production DB will have CURRENT + LAST 7 DAYS data only
>
> 2. Archive/DW DB will have CURRENT + COMPLETE HISTORY
>
> expecting something like streaming, but not ETL
>

There's nothing built-in, but that's exactly the sort of thing pglogical is
intended for. You can also build something along those lines with Londiste
fairly easily.

Hopefully this is the sort of thing we can move toward with built-in
logical replication in coming releases.
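
For concreteness, a minimal pglogical setup for this pattern might look
roughly like the following (node names, DSNs and the table are placeholders,
and the exact function signatures can differ between pglogical versions):

-- on the production (provider) node
SELECT pglogical.create_node(
    node_name := 'production',
    dsn := 'host=prod dbname=appdb');
SELECT pglogical.replication_set_add_table('default', 'public.app_events');

-- on the archive (subscriber) node
SELECT pglogical.create_node(
    node_name := 'archive',
    dsn := 'host=archive dbname=archivedb');
SELECT pglogical.create_subscription(
    subscription_name := 'archive_sub',
    provider_dsn := 'host=prod dbname=appdb');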

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] Reviewing freeze map code

2016-06-10 Thread Masahiko Sawada
On Fri, Jun 10, 2016 at 1:11 AM, Robert Haas  wrote:
> On Thu, Jun 9, 2016 at 5:48 AM, Amit Kapila  wrote:
>> Attached patch implements the above 2 functions.  I have addressed the
>> comments by Sawada San and you in latest patch and updated the documentation
>> as well.
>
> I made a number of changes to this patch.  Here is the new version.
>
> 1. The algorithm you were using for growing the array size is unsafe
> and can easily overrun the array.  Suppose that each of the first two
> pages have some corrupt tuples, more than 50% of MaxHeapTuplesPerPage
> but less than the full value of MaxTuplesPerPage.  Your code will
> conclude that the array does need to be enlarged after processing the
> first page.  I switched this to what I consider the normal coding
> pattern for such problems.
>
> 2. The all-visible checks seemed to me to be incorrect and incomplete.
> I made the check match the logic in lazy_scan_heap.
>
> 3. Your 1.0 -> 1.1 upgrade script was missing copies of the REVOKE
> statements you added to the 1.1 script.  I added them.
>
> 4. The tests as written were not safe under concurrency; they could
> return spurious results if the page changed between the time you
> checked the visibility map and the time you actually examined the
> tuples.  I think people will try running these functions on live
> systems, so I changed the code to recheck the VM bits after locking
> the page.  Unfortunately, there's either still a concurrency-related
> problem here or there's a bug in the all-frozen code itself because I
> once managed to get pg_check_frozen('pgbench_accounts') to return a
> TID while pgbench was running concurrently.  That's a bit alarming,
> but since I can't reproduce it I don't really have a clue how to track
> down the problem.
>
> 5. I made various cosmetic improvements.
>
> If there are not objections, I will go ahead and commit this tomorrow,
> because even if there is a bug (see point #4 above) I think it's
> better to have this in the tree than not.  However, code review and/or
> testing with these new functions seems like it would be an extremely
> good idea.
>
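
[For reference, the "normal coding pattern" mentioned in point 1 above is
roughly the following sketch; the struct and field names are illustrative,
not those of the committed code:]

/* before scanning each page, make room for the worst case up front */
if (items->count + MaxHeapTuplesPerPage > items->allocated)
{
    items->allocated = Max(items->allocated * 2,
                           items->count + MaxHeapTuplesPerPage);
    items->tids = (ItemPointer)
        repalloc(items->tids, items->allocated * sizeof(ItemPointerData));
}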

Thank you for working on this.
Here are some minor comments.

---
+/*
+ * Return the TIDs of not-all-visible tuples in pages marked all-visible

If there is even one non-visible tuple in a page marked all-visible, the
database might be corrupted.
Would "not-visible" or "non-visible" be better than "not-all-visible"?
---
Do we need to check the page header flag as well?
I think the database might also be corrupt when there is a non-visible
tuple on a page that has PD_ALL_VISIBLE set.
We could emit a WARNING in such a case.
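
Something like the following inside the per-page loop is what I have in
mind (just a sketch; page, blkno and the tuple-visibility result are
assumed to come from the surrounding code):

/* the page header may lie independently of the VM */
if (PageIsAllVisible(page) && !tuple_all_visible)
    ereport(WARNING,
            (errmsg("non-visible tuple found on page %u with PD_ALL_VISIBLE set",
                    blkno)));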

Also, using the attached tool, which allows us to set a spurious
visibility map status without actually modifying the tuples, I manually
created several situations where the database is corrupted and tested the
patch against them; ISTM that it works fine.
Of course I am not proposing the tool as a new feature, but please use it
as appropriate.

Regards,

--
Masahiko Sawada


corrupted_test.sql
Description: Binary data


set_spurious_vm_status.patch
Description: binary/octet-stream



Re: [HACKERS] [BUG] pg_basebackup from disconnected standby fails

2016-06-10 Thread Michael Paquier
On Thu, Jun 9, 2016 at 9:55 PM, Kyotaro HORIGUCHI
 wrote:
> Hello, I found that pg_basebackup from a replication standby
> fails after the following steps, on 9.3 and the master.
>
> - start a replication master
> - start a replication standby
> - stop the master in a mode other than immediate.
>
> pg_basebackup to the standby will fail with the following error.
>
>> pg_basebackup: could not get transaction log end position from server:
>> ERROR:  could not find any WAL files

Indeed, and you could just do the following to reproduce the failure
with the recovery test suite, so I would suggest adding this test to
the patch:
--- a/src/test/recovery/t/001_stream_rep.pl
+++ b/src/test/recovery/t/001_stream_rep.pl
@@ -24,6 +24,11 @@ $node_standby_1->start;
 # pg_basebackup works on a standby).
 $node_standby_1->backup($backup_name);

+# Take a second backup of the standby while the master is offline.
+$node_master->stop;
+$node_standby_1->backup('my_backup_2');
+$node_master->start;
+

> After looking more closely, I found that the minRecoveryPoint
> tends to be too small as the backup end point; records up to
> lastReplayedEndRecPtr can affect the pages on disk, and those pages
> can go into the backup just taken.
>
> My conclusion here is that do_pg_stop_backup should return
> lastReplayedEndRecPtr, not minRecoveryPoint.

I have been thinking quite a bit about this patch, and this logic sounds
right to me.  When stopping the backup we need to let the user know up to
which point WAL needs to be replayed, and relation pages are touched up to
lastReplayedEndRecPtr.  This LSN can be greater than the minimum recovery
point, as there is no record to mark the end of the backup.  pg_control
has normally (well, surely) been backed up last, but that's not a new
problem: it could just as well have been backed up before the minimum
recovery point was reached...
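
In other words, the fix boils down to something like the following in
do_pg_stop_backup() (a sketch only; the exact surrounding code in xlog.c
is assumed):

/* report the last replayed LSN, not minRecoveryPoint, as the stop point */
SpinLockAcquire(&XLogCtl->info_lck);
stoppoint = XLogCtl->lastReplayedEndRecPtr;
SpinLockRelease(&XLogCtl->info_lck);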

Still, perhaps we could improve the documentation regarding that?  Say
that it is recommended to set the minimum recovery point in pg_control to
the stop-backup LSN, to ensure that the backup recovers to a consistent
state when it is taken from a standby.

I am attaching an updated patch with the test btw.
-- 
Michael


backup-standby-v2.patch
Description: binary/octet-stream



[HACKERS] Online DW

2016-06-10 Thread Sridhar N Bamandlapally
Hi

Is there any feature in PostgreSQL where online DW (Data warehousing) is
possible?

I am looking for a scenario like

1. Production DB will have CURRENT + LAST 7 DAYS data only

2. Archive/DW DB will have CURRENT + COMPLETE HISTORY

expecting something like streaming, but not ETL

Thanks
Sridhar


[HACKERS] Re: Should phraseto_tsquery('simple', 'blue blue') @@ to_tsvector('simple', 'blue') be true ?

2016-06-10 Thread Noah Misch
On Tue, Jun 07, 2016 at 06:05:10PM -0400, Tom Lane wrote:
> Jean-Pierre Pelletier  writes:
> > I wanted to test if phraseto_tsquery(), new with 9.6 could be used for
> > matching consecutive words but it won't work for us if it cannot handle
> > consecutive *duplicate* words.
> 
> > For example, the following returns true:
> > select phraseto_tsquery('simple', 'blue blue') @@ to_tsvector('simple', 'blue');
> 
> > Is this expected ?
> 
> I concur that that seems like a rather useless behavior.  If we have
> "x <-> y" it is not possible to match at distance zero, while if we
> have "x <-> x" it seems unlikely that the user is expecting us to
> treat that identically to "x".  So phrase search simply should not
> consider distance-zero matches.
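
[For reference, the behavior at issue as reported, in a form that can be
run directly on 9.6:]

SELECT phraseto_tsquery('simple', 'blue blue') @@ to_tsvector('simple', 'blue');
-- currently returns true; per the reasoning above it should be false,
-- since "blue <-> blue" ought to require two adjacent occurrences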

[Action required within 72 hours.  This is a generic notification.]

The above-described topic is currently a PostgreSQL 9.6 open item.  Teodor,
since you committed the patch believed to have created it, you own this open
item.  If some other commit is more relevant or if this does not belong as a
9.6 open item, please let us know.  Otherwise, please observe the policy on
open item ownership[1] and send a status update within 72 hours of this
message.  Include a date for your subsequent status update.  Testers may
discover new open items at any time, and I want to plan to get them all fixed
well in advance of shipping 9.6rc1.  Consequently, I will appreciate your
efforts toward speedy resolution.  Thanks.

[1] 
http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com




Re: [HACKERS] Reviewing freeze map code

2016-06-10 Thread Amit Kapila
On Thu, Jun 9, 2016 at 9:41 PM, Robert Haas  wrote:
>
>
>
> 2. The all-visible checks seemed to me to be incorrect and incomplete.
> I made the check match the logic in lazy_scan_heap.
>

Okay, I thought we just wanted to check for dead tuples.  If we want
logic similar to lazy_scan_heap(), then I think we should also consider
applying the old snapshot threshold limit to oldestxmin.  We currently do
that in vacuum_set_xid_limits() for VACUUM.  Is there a reason not to
consider it for the visibility check functions?
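
Concretely, I am thinking of something like this in the check functions (a
sketch; GetOldestXmin() and TransactionIdLimitedForOldSnapshots() are the
existing functions, and the surrounding code is assumed):

OldestXmin = GetOldestXmin(rel, true);
if (old_snapshot_threshold >= 0)
    OldestXmin = TransactionIdLimitedForOldSnapshots(OldestXmin, rel);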


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Reviewing freeze map code

2016-06-10 Thread Andres Freund
On 2016-06-10 11:58:26 +0530, Amit Kapila wrote:
> I have tried multiple ways of running pgbench with read-write tests, but
> could not see any such behaviour.

It took over an hour of pgbench on a fast laptop till I saw it.


> I even tried crashing and
> restarting the server and then running pgbench again.  Do you see these
> records on the master or the standby?

Master, but with an existing standby. So it could be related to
hot_standby_feedback or such.


> While looking at the code in this area, I observed that during replay of
> records (heap_xlog_delete), we first clear the VM and then update the page.
> So we don't hold the buffer lock while updating the VM, whereas in the patch
> (collect_corrupt_items()) we rely on the fact that clearing a VM bit
> requires acquiring the buffer lock.  Can that cause a problem?

Unsetting a VM bit is always safe, right?  The invariant is that the VM
may never falsely say all_visible/frozen, but it's perfectly OK for a
page to be all-visible/frozen without the VM bit being set.


Andres




Re: [HACKERS] Reviewing freeze map code

2016-06-10 Thread Amit Kapila
On Fri, Jun 10, 2016 at 8:27 AM, Andres Freund  wrote:

>
>
> On June 9, 2016 7:46:06 PM PDT, Amit Kapila 
> wrote:
> >On Fri, Jun 10, 2016 at 8:08 AM, Andres Freund 
> >wrote:
> >
> >> On 2016-06-09 19:33:52 -0700, Andres Freund wrote:
> >> > I played with it for a while, and besides
> >> > finding intentionally caused corruption, it didn't flag anything
> >> > (besides crashing on a standby, as in 2)).
> >>
> >> Ugh. Just seconds after I sent that email:
> >>
> >>        oid        |    t_ctid
> >> ------------------+---------------
> >>  pgbench_accounts | (889641,33)
> >>  pgbench_accounts | (893854,56)
> >>  pgbench_accounts | (924226,13)
> >>  pgbench_accounts | (1073457,51)
> >>  pgbench_accounts | (1084904,16)
> >>  pgbench_accounts | (996,26)
> >> (6 rows)
> >>
> >>  oid | t_ctid
> >> -----+--------
> >> (0 rows)
> >>
> >>        oid        |    t_ctid
> >> ------------------+---------------
> >>  pgbench_accounts | (739198,13)
> >>  pgbench_accounts | (887254,11)
> >>  pgbench_accounts | (1050391,6)
> >>  pgbench_accounts | (1158640,46)
> >>  pgbench_accounts | (1238067,18)
> >>  pgbench_accounts | (1273282,22)
> >>  pgbench_accounts | (1355816,54)
> >>  pgbench_accounts | (1361880,33)
> >> (8 rows)
> >>
> >>
> >Is this output of pg_check_visible()  or pg_check_frozen()?
>
> Unfortunately I don't know. I was running a union of both; I didn't really
> expect to hit an issue... I guess I'll put a PANIC in the relevant places
> and check whether I can reproduce it.
>
>

I have tried multiple ways of running pgbench with read-write tests, but
could not see any such behaviour.  I even tried crashing and restarting
the server and then running pgbench again.  Do you see these records on
the master or the standby?

While looking at the code in this area, I observed that during replay of
records (heap_xlog_delete), we first clear the VM and then update the page.
So we don't hold the buffer lock while updating the VM, whereas in the patch
(collect_corrupt_items()) we rely on the fact that clearing a VM bit
requires acquiring the buffer lock.  Can that cause a problem?

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com