Re: [HACKERS] Retiring from the Core Team

2017-01-11 Thread Devrim Gündüz

Hi,

On Wed, 2017-01-11 at 16:29 -0800, Josh Berkus wrote:
> You will have noticed that I haven't been very active for the past year.
>  My new work on Linux containers and Kubernetes has been even more
> absorbing than I anticipated, and I just haven't had a lot of time for
> PostgreSQL work.
> 
> For that reason, as of today, I am stepping down from the PostgreSQL
> Core Team.

This is sad news. Thank you for your great contributions to PostgreSQL, and I
wish you success in the future!

Regards,
-- 
Devrim Gündüz
EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR


signature.asc
Description: This is a digitally signed message part


Re: [HACKERS] parallelize queries containing subplans

2017-01-11 Thread Dilip Kumar
On Thu, Jan 12, 2017 at 9:22 AM, Amit Kapila  wrote:
> Valid point, but I think we can avoid that by returning false after the
> foreach(..) loop.  I think one improvement could be that instead of
> manually checking the parallel safety of each subplan, we can
> recursively call max_parallel_hazard_walker for each subplan.

I agree that this way we can avoid the problem I mentioned.
>
>
>> But, more than that, it will be cleaner to not
>> handle AlternativeSubPlan here unless there is some strong reason.
>>
>
> Yeah, the reason is to avoid unnecessary recursion.  Also, in all
> other places in the code, we do handle both of them separately; refer to
> cost_qual_eval_walker for somewhat similar usage.

ok.



-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Retiring from the Core Team

2017-01-11 Thread Peter Geoghegan
On Wed, Jan 11, 2017 at 4:29 PM, Josh Berkus  wrote:
> You will have noticed that I haven't been very active for the past year.
>  My new work on Linux containers and Kubernetes has been even more
> absorbing than I anticipated, and I just haven't had a lot of time for
> PostgreSQL work.
>
> For that reason, as of today, I am stepping down from the PostgreSQL
> Core Team.

Thank you Josh!


-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Rename pg_switch_xlog to pg_switch_wal

2017-01-11 Thread Fujii Masao
On Thu, Jan 12, 2017 at 2:49 PM, Michael Paquier
 wrote:
> On Thu, Jan 12, 2017 at 2:00 AM, Vladimir Rusinov  wrote:
>> On Tue, Jan 10, 2017 at 5:24 AM, Michael Paquier 
>> wrote:
>>> As there are two schools of thought on this thread, keeping your patch
>>> with the compatibility table is the best move for now. Even if we end
>>> up by having a version without aliases, that will be just code to
>>> remove in the final version.
>>
>> Indeed, it is trivial to kill aliases.
>>
>> New version of the patch attached.
>
> The patch still updates a bunch of .po files. Those are normally
> refreshed with the translation updates, so they had better be removed.
> Other than that, the patch looks in good shape to me so switch to
> "Ready for committer".
>
> So, to sum up things on this thread, here are the votes about the use
> of aliases or a pure breakage:
> - No to aliases, shake the world: Stephen, Tom, David F, Vik, Bruce M => 5
> - Yes to aliases: Michael P, Andres, Peter E., Cynthia S, Jim N,
> Vladimir, Simon R, Fujii-san => 8
> If I misunderstood things, please feel free to speak up.

To be precise, I'd like to avoid renaming the functions at all, rather
than using aliases.

Regards,

-- 
Fujii Masao


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Rename pg_switch_xlog to pg_switch_wal

2017-01-11 Thread Michael Paquier
On Thu, Jan 12, 2017 at 2:00 AM, Vladimir Rusinov  wrote:
> On Tue, Jan 10, 2017 at 5:24 AM, Michael Paquier 
> wrote:
>> As there are two schools of thought on this thread, keeping your patch
>> with the compatibility table is the best move for now. Even if we end
>> up by having a version without aliases, that will be just code to
>> remove in the final version.
>
> Indeed, it is trivial to kill aliases.
>
> New version of the patch attached.

The patch still updates a bunch of .po files. Those are normally
refreshed with the translation updates, so they had better be removed.
Other than that, the patch looks in good shape to me so switch to
"Ready for committer".

So, to sum up things on this thread, here are the votes about the use
of aliases or a pure breakage:
- No to aliases, shake the world: Stephen, Tom, David F, Vik, Bruce M => 5
- Yes to aliases: Michael P, Andres, Peter E., Cynthia S, Jim N,
Vladimir, Simon R, Fujii-san => 8
If I misunderstood things, please feel free to speak up.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal for changes to recovery.conf API

2017-01-11 Thread Fujii Masao
On Wed, Jan 11, 2017 at 7:36 PM, Simon Riggs  wrote:
> On 11 January 2017 at 09:51, Fujii Masao  wrote:
>
>>> 5. recovery.conf parameters are all moved to postgresql.conf, with these 
>>> changes
>>
>> In the current design of the patch, when recovery parameters are misconfigured
>> (e.g., recovery_target_timeline set to an invalid timeline id) and
>> the configuration file is reloaded, the startup process emits a FATAL error and
>> the server goes down. I don't think this is acceptable. Basically, even
>> misconfiguration of the parameters should not cause a server crash.
>> If invalid settings are supplied, I think we should just emit a warning
>> and ignore the new settings, as other GUCs do. Thoughts?
>
> Thanks for your comments.
>
> The specification of the recovery target parameters should be different, IMHO.
>
> If the user is performing a recovery and the target of the recovery is
> incorrectly specified then it is clear that the recovery cannot
> continue with an imprecisely specified target.

Could you tell me what case of "the target of the recovery is incorrectly
specified" you are thinking of? For example, are you thinking of the case
where a timestamp value with an invalid format is specified in
recovery_target_time? In that case, I think we should issue a WARNING and
ignore the invalid setting when the configuration file is reloaded. Of
course, at server startup such an invalid setting should prevent the server
from starting up.

Regarding recovery_target_timeline, isn't it better to mark that parameter
as PGC_POSTMASTER? I'm not sure we really want to change the recovery
target timeline without a server restart, and I'm also not sure that
operation is safe.

Regards,

-- 
Fujii Masao


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] background sessions

2017-01-11 Thread Andrew Borodin
2017-01-12 9:01 GMT+05:00 Peter Eisentraut :
> On 1/10/17 10:58 AM, Robert Haas wrote:
>> On Thu, Dec 29, 2016 at 5:18 PM, Peter Eisentraut
>>  wrote:
>>> For additional entertainment, I include patches that integrate
>>> background sessions into dblink.  So dblink can open a connection to a
>>> background session, and then you can use the existing dblink functions
>>> to send queries, read results, etc.  People use dblink to make
>>> self-connections to get autonomous subsessions, so this would directly
>>> address that use case.  The 0001 patch is some prerequisite refactoring
>>> to remove an ugly macro mess, which is useful independent of this.  0002
>>> is the actual patch.
>>
>> Would that constitute a complete replacement for pg_background?
>
> I think so.

That constitutes a replacement for the existing set of functionality.
It's not certain whether new features for pg_background would be
coherent with the dblink ideology.
E.g., if one day we implement data exchange between two running queries
for pg_background, it would conflict with the dblink ideology.

I have no opinion on whether dblink or pg_background is the appropriate
place for this functionality. Just mentioning some thoughts.

BTW, can we have an automatic FDW for background sessions? Sounds crazy
to me, but it technically makes sense.

Best regards, Andrey Borodin.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] many copies of atooid() and oid_cmp()

2017-01-11 Thread Kuntal Ghosh
On Wed, Jan 11, 2017 at 9:42 PM, Peter Eisentraut
 wrote:
> There are approximately 11 copies of atooid() and 3 of oid_cmp() or
> equivalent, and pending patches are proposing to add more.  I propose
> these two patches to collect them in central places.
>
I've verified that the patch covers all the copies of atooid() and
oid_cmp() that should be replaced. However, as Tom suggested, I've
looked into pg_dump and found that there is an oidcmp() as well. It may
have been missed because it has a different name.

I was wondering whether it is worth replacing the following as well:
(TransactionId) strtoul(str, NULL, 16) with something like #define atotranid().
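
For illustration, a minimal sketch of what such a macro could look like,
modeled on the existing atooid() and keeping the hexadecimal base used at
those call sites (atotranid() is only a suggested name, not an existing
symbol):

#define atooid(x)     ((Oid) strtoul((x), NULL, 10))            /* existing */
#define atotranid(x)  ((TransactionId) strtoul((x), NULL, 16))  /* proposed */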

-- 
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Misplacement of function declaration in contrib/postgres_fdw/postgres_fdw.h

2017-01-11 Thread Ashutosh Bapat
On Thu, Jan 12, 2017 at 8:49 AM, Etsuro Fujita
 wrote:
> Hi,
>
> While working on pushing down more joins/updates to the remote, I noticed
> that in contrib/postgres_fdw/postgres_fdw.h the declaration of
> get_jointype_name is misplaced in the section of shippable.c.  Since that
> function is defined in contrib/postgres_fdw/deparse.c, we should put that
> declaration in the section of deparse.c in the header file. Attached is a
> patch for that.
>

I think that initially (probably in a never-committed patch) the function
was used to check whether a join type is shippable and, if so, return
the name to be used in the query. That may be the reason why it ended
up in shippable.c. But later the shippability test was separated
from the code which required the string representation. Thanks for
pointing out the discrepancy. The patch looks good. As a side change,
should we include "JOIN" in the string returned by this function? The
two places where this function is called append "JOIN" to the string
returned by this function. Although, even without that change, the
patch looks good.

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Couple of issues with prepared FETCH commands

2017-01-11 Thread Tom Lane
Robert Haas  writes:
> On Tue, Jan 10, 2017 at 5:38 PM, Andrew Gierth
>  wrote:
>> But the problem that actually came up is this: if you do the PQprepare
>> before the named cursor has actually been opened, then everything works
>> _up until_ the first event, such as a change to search_path, that forces
>> a revalidation; and at that point it fails with the "must not change
>> result type" error _even if_ the cursor always has exactly the same
>> result type.  This happens because the initial prepare actually stored
>> NULL for plansource->resultDesc, since the cursor name wasn't found,
>> while on the revalidate, when the cursor obviously does exist, it gets
>> the actual result type.
>> 
>> It seems a bit of a "gotcha" to have it fail in this case when the
>> result type isn't actually being checked in other cases.

> To me, that sounds like a bug.

Yeah --- specifically, I wonder why we allow the reference to an
unrecognized cursor name to succeed.  Or were you defining the bug
differently?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] many copies of atooid() and oid_cmp()

2017-01-11 Thread Tom Lane
Peter Eisentraut  writes:
> There are approximately 11 copies of atooid() and 3 of oid_cmp() or
> equivalent, and pending patches are proposing to add more.  I propose
> these two patches to collect them in central places.

+1 for the concept, but I'm a bit worried about putting atooid() in
postgres_ext.h.  That's going to impose on the namespace of libpq-using
applications, for instance.  A more conservative answer would be to
add it to c.h.  OTOH, postgres_ext.h is where the Oid typedef lives,
so I do see the consistency of adding this there.  Hard choice.

The oid_cmp() move looks fine if we only need it on the server side.
But doesn't pg_dump have one too?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Passing query string to workers

2017-01-11 Thread Tom Lane
Robert Haas  writes:
> On Wed, Jan 11, 2017 at 6:36 PM, Tom Lane  wrote:
>> That would work, if you had a way to get at the active QueryDesc ...
>> but we don't pass that down to executor nodes.

> Hmm, that is a bit of a problem.  Do you have a suggestion?

Copy that string pointer into the EState, perhaps?
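
(A minimal sketch of that idea, with a hypothetical field name -- the point
is simply that the string already stored in QueryDesc->sourceText becomes
reachable from any executor node through its EState:)

	/* hypothetical addition to EState in execnodes.h */
	const char *es_sourceText;	/* source text of the query */

	/* in standard_ExecutorStart(), once the EState exists: */
	estate->es_sourceText = queryDesc->sourceText;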

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WARM and indirect indexes

2017-01-11 Thread Pavan Deolasee
On Thu, Jan 12, 2017 at 3:08 AM, Robert Haas  wrote:

> On Tue, Jan 10, 2017 at 2:24 PM, Alvaro Herrera
>  wrote:
> > The big advantage of WARM is that it works automatically, like HOT: the
> > user doesn't need to do anything different than today to get the
> > benefit.  With indirect indexes, the user needs to create the index as
> > indirect explicitely.
>
> However, this cuts both ways.  If the WARM implementation has bugs --
> either data-corrupting bugs or crash bugs or returns-wrong-answer bugs
> or performance-in-corner-cases bugs -- everyone will be exposed to
> them.



IMHO WARM is far less complicated and intrusive than HOT was. It doesn't
change any of the MVCC mechanics, nor does it change when and how tuples are
marked dead or when and how dead tuples are removed. What it changes is how
tuples are indexed and accessed via index methods. So I believe bugs in
this area could possibly corrupt indexes or return wrong results, which is
bad, but that could have happened with many other patches we did in the
recent past. The other thing the patch changes is how the update chain is
maintained. In order to quickly find the root offset while updating a
tuple, we now store the root offset in the t_ctid field of the last tuple
in the chain and use a separate bit to mark the end of the chain (instead
of relying on the t_ctid == t_self check). That can lead to problems if
chains are not maintained or followed correctly. These changes are in the
first patch of the series, and if you have any suggestions on how to
improve that or solidify chain following, please let me know. I was looking
for some way to hide the t_ctid field to ensure that the links are only
accessed via some standard API.
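
(For illustration, a sketch of the kind of standard API meant here -- all
names are hypothetical, invented for this example rather than taken from
the posted patches:)

/* hypothetical: test the end-of-chain bit instead of t_ctid == t_self */
#define HeapTupleHeaderIsChainEnd(tup) \
	(((tup)->t_infomask2 & HEAP_CHAIN_END) != 0)

/* hypothetical: fetch the root line-pointer offset stashed in t_ctid */
#define HeapTupleHeaderGetRootOffset(tup) \
	(AssertMacro(HeapTupleHeaderIsChainEnd(tup)), \
	 ItemPointerGetOffsetNumber(&(tup)->t_ctid))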

As the developer of the patch, what I would like to know is what we can do
to address the concerns you've raised, and what kind of tests you would
like to see in order to gain confidence in the patch. What I've done so far
is to rely on the existing tests such as regression, isolation and pgbench.
After adding support for system tables, the code gets exercised even more
during regression tests, which is good. I also performed a few tests where
I would turn sequential scan off and then run "make installcheck" and
compare regression diffs between master and patched code. That helps
because the index access paths are used even more often. I did not find any
bugs in those tests.

My favourite test during HOT development was to run pgbench with a large
number of clients and periodically check for data consistency while the
tests were running, by comparing sum(tbalance), sum(bbalance) and
sum(abalance) values. I've yet to do that kind of test with WARM because it
would require a slightly different test setup (more indexes and more update
statements), but I intend to do those tests too. I have also started
writing regression test cases that exercise some corner cases, and I'll
share them for inclusion irrespective of WARM.

Please share your thoughts on what more can be and should be done.

Thanks,
Pavan
-- 
 Pavan Deolasee   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] background sessions

2017-01-11 Thread Peter Eisentraut
On 1/10/17 10:58 AM, Robert Haas wrote:
> On Thu, Dec 29, 2016 at 5:18 PM, Peter Eisentraut
>  wrote:
>> For additional entertainment, I include patches that integrate
>> background sessions into dblink.  So dblink can open a connection to a
>> background session, and then you can use the existing dblink functions
>> to send queries, read results, etc.  People use dblink to make
>> self-connections to get autonomous subsessions, so this would directly
>> address that use case.  The 0001 patch is some prerequisite refactoring
>> to remove an ugly macro mess, which is useful independent of this.  0002
>> is the actual patch.
> 
> Would that constitute a complete replacement for pg_background?

I think so.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Retiring from the Core Team

2017-01-11 Thread Joe Conway
On 01/11/2017 04:29 PM, Josh Berkus wrote:
> For that reason, as of today, I am stepping down from the PostgreSQL
> Core Team.



> It's been a long, fun ride, and I'm proud of the PostgreSQL we have
> today: both the database, and the community.  Thank you for sharing it
> with me.

End of an era! Thank you for your many efforts for the community, past,
present, and future. Hope we'll still see you at various events during
the year.

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development



signature.asc
Description: OpenPGP digital signature


Re: [HACKERS] Declarative partitioning - another take

2017-01-11 Thread Amit Langote
On 2017/01/06 20:23, Amit Langote wrote:
> On 2017/01/05 3:26, Robert Haas wrote:
>> It's unclear to me why we need to do 0002.  It doesn't seem like it
>> should be necessary, it doesn't seem like a good idea, and the commit
>> message you proposed is uninformative.
>
> If a single BulkInsertState object is passed to
> heap_insert()/heap_multi_insert() for different heaps corresponding to
> different partitions (from one input tuple to next), tuples might end up
> going into wrong heaps (like demonstrated in one of the reports [1]).  A
> simple solution is to disable bulk-insert in case of partitioned tables.
>
> But my patch (or its motivations) was slightly wrongheaded, wherein I
> conflated multi-insert stuff and bulk-insert considerations.  I revised
> 0002 to not do that.

Ragnar Ouchterlony pointed out [1] on pgsql-bugs that 0002 wasn't correct.
Attaching updated 0002 along with rebased 0001 and 0003.

Thanks,
Amit

[1]
https://www.postgresql.org/message-id/732dfc84-25f5-413c-1eee-0bfa7a370093%40agama.tv
>From f2a64348021c7dba1f96d0c8b4e3e253f635b019 Mon Sep 17 00:00:00 2001
From: amit 
Date: Wed, 28 Dec 2016 10:10:26 +0900
Subject: [PATCH 1/3] Set ecxt_scantuple correctly for tuple-routing

In 2ac3ef7a01df859c62d0a02333b646d65eaec5ff, we changed things so that
it's possible for a different TupleTableSlot to be used for partitioned
tables at successively lower levels.  If we do end up changing the slot
from the original, we must update ecxt_scantuple to point to the new one
for the partition key of the tuple to be computed correctly.

Also update the regression tests so that the code manipulating
ecxt_scantuple is covered.

Reported by: Rajkumar Raghuwanshi
Patch by: Amit Langote
Reports: https://www.postgresql.org/message-id/CAKcux6%3Dm1qyqB2k6cjniuMMrYXb75O-MB4qGQMu8zg-iGGLjDw%40mail.gmail.com
---
 src/backend/catalog/partition.c  | 29 ++---
 src/backend/executor/execMain.c  |  2 --
 src/test/regress/expected/insert.out |  2 +-
 src/test/regress/sql/insert.sql  |  2 +-
 4 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/src/backend/catalog/partition.c b/src/backend/catalog/partition.c
index f54e1bdf3f..0de1cf245a 100644
--- a/src/backend/catalog/partition.c
+++ b/src/backend/catalog/partition.c
@@ -1643,7 +1643,10 @@ get_partition_for_tuple(PartitionDispatch *pd,
 	bool		isnull[PARTITION_MAX_KEYS];
 	int			cur_offset,
 cur_index;
-	int			i;
+	int			i,
+result;
+	ExprContext *ecxt = GetPerTupleExprContext(estate);
+	TupleTableSlot *ecxt_scantuple_old = ecxt->ecxt_scantuple;
 
 	/* start with the root partitioned table */
 	parent = pd[0];
@@ -1672,7 +1675,14 @@ get_partition_for_tuple(PartitionDispatch *pd,
 			slot = myslot;
 		}
 
-		/* Extract partition key from tuple */
+		/*
+		 * Extract partition key from tuple; FormPartitionKeyDatum() expects
+		 * ecxt_scantuple to point to the correct tuple slot (which might be
+		 * different from the slot we received from the caller if the
+		 * partitioned table of the current level has different tuple
+		 * descriptor from its parent).
+		 */
+		ecxt->ecxt_scantuple = slot;
 		FormPartitionKeyDatum(parent, slot, estate, values, isnull);
 
 		if (key->strategy == PARTITION_STRATEGY_RANGE)
@@ -1727,16 +1737,21 @@ get_partition_for_tuple(PartitionDispatch *pd,
 		 */
 		if (cur_index < 0)
 		{
+			result = -1;
 			*failed_at = RelationGetRelid(parent->reldesc);
-			return -1;
+			break;
 		}
-		else if (parent->indexes[cur_index] < 0)
-			parent = pd[-parent->indexes[cur_index]];
-		else
+		else if (parent->indexes[cur_index] >= 0)
+		{
+			result = parent->indexes[cur_index];
 			break;
+		}
+		else
+			parent = pd[-parent->indexes[cur_index]];
 	}
 
-	return parent->indexes[cur_index];
+	ecxt->ecxt_scantuple = ecxt_scantuple_old;
+	return result;
 }
 
 /*
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index ff277d300a..6a9bc8372f 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -3167,9 +3167,7 @@ ExecFindPartition(ResultRelInfo *resultRelInfo, PartitionDispatch *pd,
 {
 	int		result;
 	Oid		failed_at;
-	ExprContext *econtext = GetPerTupleExprContext(estate);
 
-	econtext->ecxt_scantuple = slot;
	result = get_partition_for_tuple(pd, slot, estate, &failed_at);
 	if (result < 0)
 	{
diff --git a/src/test/regress/expected/insert.out b/src/test/regress/expected/insert.out
index ca3134c34c..1c7b8047ee 100644
--- a/src/test/regress/expected/insert.out
+++ b/src/test/regress/expected/insert.out
@@ -302,7 +302,7 @@ drop cascades to table part_ee_ff1
 drop cascades to table part_ee_ff2
 -- more tests for certain multi-level partitioning scenarios
 create table p (a int, b int) partition by range (a, b);
-create table p1 (b int, a int not null) partition by range (b);
+create table p1 (b int not null, a int not null) partition by range ((b+0));
 create table p11 (like p1);
 alter table p11 drop a;
 alter table p11 add a 

Re: [HACKERS] parallelize queries containing subplans

2017-01-11 Thread Amit Kapila
On Tue, Jan 10, 2017 at 11:55 AM, Dilip Kumar  wrote:
> On Wed, Dec 28, 2016 at 11:47 AM, Amit Kapila  wrote:
>> Attached patch implements the above idea.  This will enable
>> parallelism for queries containing un-correlated subplans, an example
>> of which is as follows:
>
> I have reviewed the patch (pq_pushdown_subplan_v1.patch); mostly the
> patch looks clean to me.
> I have also done some basic functional testing and it's working fine.
>
> I have one comment.
>
> + else if (IsA(node, AlternativeSubPlan))
> + {
> + AlternativeSubPlan *asplan = (AlternativeSubPlan *) node;
> + ListCell   *lc;
> +
> + Assert(context->root);
> +
> + foreach(lc, asplan->subplans)
> + {
> + SubPlan*splan = (SubPlan *) lfirst(lc);
> + Plan   *plan;
> +
> + Assert(IsA(splan, SubPlan));
> +
> + plan = planner_subplan_get_plan(context->root, splan);
> + if (!plan->parallel_safe)
> + return true;
> + }
>   }
>
> I see no reason why we need to process the subplan list of
> AlternativeSubPlan here, Anyway expression_tree_walker will take care
> of processing each subplan of AlternativeSubPlan. Now in the case
> where all the subplans in AlternativeSubPlan are parallel_safe we will
> process this list twice.
>

Valid point, but I think we can avoid that by returning false after the
foreach(..) loop.  I think one improvement could be that instead of
manually checking the parallel safety of each subplan, we can
recursively call max_parallel_hazard_walker for each subplan.
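
(For illustration, a hedged sketch of that revision against the fragment
quoted above -- recurse into each subplan via the walker itself, and return
false afterwards so expression_tree_walker does not visit the list a second
time:)

	else if (IsA(node, AlternativeSubPlan))
	{
		AlternativeSubPlan *asplan = (AlternativeSubPlan *) node;
		ListCell   *lc;

		Assert(context->root);

		foreach(lc, asplan->subplans)
		{
			/* recurse instead of checking plan->parallel_safe by hand */
			if (max_parallel_hazard_walker((Node *) lfirst(lc), context))
				return true;
		}
		return false;	/* subplans handled; skip expression_tree_walker */
	}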


> But, more than that, it will be cleaner to not
> handle AlternativeSubPlan here unless there is some strong reason.
>

Yeah, the reason is to avoid unnecessary recursion.  Also, in all
other places in the code, we do handle both of them separately; refer to
cost_qual_eval_walker for somewhat similar usage.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: [[Parallel] Shared] Hash

2017-01-11 Thread Thomas Munro
On Wed, Jan 11, 2017 at 2:56 PM, Peter Geoghegan  wrote:
> On Fri, Jan 6, 2017 at 12:01 PM, Thomas Munro
>  wrote:
>> Here is a new WIP patch.  I have plenty of things to tidy up (see note
>> at end), but the main ideas are now pretty clear and I'd appreciate
>> some feedback.
>
> I have some review feedback for your V3. I've chosen to start with the
> buffile.c stuff, since of course it might share something with my
> parallel tuplesort patch. This isn't comprehensive, but I will have
> more comprehensive feedback soon.

Thanks!

> I'm not surprised that you've generally chosen to make shared BufFile
> management as simple as possible, with no special infrastructure other
> than the ability to hold open other backend temp files concurrently
> within a worker, and no writing to another worker's temp file, or
> shared read pointer. As you put it, everything is immutable. I
> couldn't see much opportunity for adding a lot of infrastructure that
> wasn't written explicitly as parallel hash join code/infrastructure.
> My sense is that that was a good decision. I doubted that you'd ever
> want some advanced, generic shared BufFile thing with multiple read
> pointers, built-in cache coherency, etc. (Robert seemed to think that
> you'd go that way, though.)

Right, this is extremely minimalist infrastructure.  fd.c is
unchanged.  buffile.c only gains the power to export/import read-only
views of BufFiles.  There is no 'unification' of BufFiles: each hash
join participant simply reads from the buffile it wrote, and then
imports and reads from its peers' BufFiles, until all are exhausted;
so the 'unification' is happening in caller code which knows about the
set of participants and manages shared read positions.  Clearly there
are some ownership/cleanup issues to straighten out, but I think those
problems are fixable (probably involving refcounts).

I'm entirely willing to throw that away and use the unified BufFile
concept, if it can be extended to support multiple readers of the
data, where every participant unifies the set of files.  I have so far
assumed that it would be most efficient for each participant to read
from the file that it wrote before trying to read from files written
by other participants.  I'm reading your patch now; more soon.

> Anyway, some more specific observations:
>
> * ISTM that this is the wrong thing for shared BufFiles:
>
>> +BufFile *
>> +BufFileImport(BufFileDescriptor *descriptor)
>> +{
> ...
>> +   file->isInterXact = true; /* prevent cleanup by this backend */
>
> There is only one user of isInterXact = true BufFiles at present,
> tuplestore.c. It, in turn, only does so for cases that require
> persistent tuple stores. A quick audit of these tuplestore.c callers
> show this to just be cursor support code within portalmem.c. Here is
> the relevant tuplestore_begin_heap() rule that that code adheres to,
> unlike the code I've quoted above:
>
>  * interXact: if true, the files used for on-disk storage persist beyond the
>  * end of the current transaction.  NOTE: It's the caller's responsibility to
>  * create such a tuplestore in a memory context and resource owner that will
>  * also survive transaction boundaries, and to ensure the tuplestore is closed
>  * when it's no longer wanted.

Hmm.  Yes, that is an entirely bogus use of isInterXact.  I am
thinking about how to fix that with refcounts.

> I don't think it's right for buffile.c to know anything about file
> paths directly -- I'd say that that's a modularity violation.
> PathNameOpenFile() is called by very few callers at the moment, all of
> them very low level (e.g. md.c), but you're using it within buffile.c
> to open a path to the file that you obtain from shared memory

Hmm.  I'm not seeing the modularity violation.  buffile.c uses
interfaces already exposed by fd.c to do this:  OpenTemporaryFile,
then FilePathName to find the path, then PathNameOpenFile to open from
another process.  I see that your approach instead has client code
provide more meta data so that things can be discovered, which may
well be a much better idea.
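
(For reference, a minimal sketch of the flow just described, using those
fd.c interfaces as they exist today -- error handling and the shared-memory
transport of the path are omitted:)

	/* writer side: create the temp file and learn its path */
	File	file = OpenTemporaryFile(false);
	char   *path = FilePathName(file);	/* share this via DSM */

	/* reader side, in another backend: open the same file read-only */
	File	peer = PathNameOpenFile(path, O_RDONLY | PG_BINARY, 0);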

> directly. This is buggy because the following code won't be reached in
> workers that call your BufFileImport() function:
>
> /* Mark it for deletion at close */
> VfdCache[file].fdstate |= FD_TEMPORARY;
>
> /* Register it with the current resource owner */
> if (!interXact)
> {
> VfdCache[file].fdstate |= FD_XACT_TEMPORARY;
>
> ResourceOwnerEnlargeFiles(CurrentResourceOwner);
> ResourceOwnerRememberFile(CurrentResourceOwner, file);
> VfdCache[file].resowner = CurrentResourceOwner;
>
> /* ensure cleanup happens at eoxact */
> have_xact_temporary_files = true;
> }

Right, that is a problem.  A refcount mode could fix that; virtual
file descriptors would be closed in every backend using the current
resource owner, and the files would be deleted when the last one turns
out the lights.
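
(Sketched very roughly, such a refcount might live in a small shared-memory
control block -- everything here is hypothetical, not from the posted
patch:)

	/* hypothetical control block for a set of shared BufFiles */
	typedef struct SharedBufFileSet
	{
		slock_t		mutex;		/* protects refcnt */
		int			refcnt;		/* attached backends; the last one to
								 * detach unlinks the files */
	} SharedBufFileSet;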

[HACKERS] Misplacement of function declaration in contrib/postgres_fdw/postgres_fdw.h

2017-01-11 Thread Etsuro Fujita

Hi,

While working on pushing down more joins/updates to the remote, I  
noticed that in contrib/postgres_fdw/postgres_fdw.h the declaration of  
get_jointype_name is misplaced in the section of shippable.c.  Since  
that function is defined in contrib/postgres_fdw/deparse.c, we should  
put that declaration in the section of deparse.c in the header file.  
Attached is a patch for that.


Best regards,
Etsuro Fujita
*** a/contrib/postgres_fdw/postgres_fdw.h
--- b/contrib/postgres_fdw/postgres_fdw.h
***
*** 163,172  extern void deparseSelectStmtForRel(StringInfo buf, PlannerInfo *root,
  		RelOptInfo *foreignrel, List *tlist,
  		List *remote_conds, List *pathkeys,
  		List **retrieved_attrs, List **params_list);
  
  /* in shippable.c */
  extern bool is_builtin(Oid objectId);
  extern bool is_shippable(Oid objectId, Oid classId, PgFdwRelationInfo *fpinfo);
- extern const char *get_jointype_name(JoinType jointype);
  
  #endif   /* POSTGRES_FDW_H */
--- 163,172 
  		RelOptInfo *foreignrel, List *tlist,
  		List *remote_conds, List *pathkeys,
  		List **retrieved_attrs, List **params_list);
+ extern const char *get_jointype_name(JoinType jointype);
  
  /* in shippable.c */
  extern bool is_builtin(Oid objectId);
  extern bool is_shippable(Oid objectId, Oid classId, PgFdwRelationInfo *fpinfo);
  
  #endif   /* POSTGRES_FDW_H */

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Passing query string to workers

2017-01-11 Thread Robert Haas
On Wed, Jan 11, 2017 at 6:36 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> On Wed, Jan 11, 2017 at 7:37 AM, Tom Lane  wrote:
>>> As far as reproducing the pg_stat_activity query goes, you could probably
>>> grab that string out of the master backend's pgstat entry and pass it over
>>> at parallel query start.  But please don't confuse it with either
>>> debug_query_string or the string referenced by the QueryDesc; I do not
>>> think there's any guarantee that those are the same.
>
>> I think we should pass only the string referenced by the QueryDesc to
>> the worker, and on the worker side report that via debug_query_string
>> and pg_stat_activity and attach it to the QueryDesc itself.  Is that
>> also your view?
>
> That would work, if you had a way to get at the active QueryDesc ...
> but we don't pass that down to executor nodes.

Hmm, that is a bit of a problem.  Do you have a suggestion?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] parallelize queries containing subplans

2017-01-11 Thread Robert Haas
On Wed, Jan 11, 2017 at 9:58 PM, Amit Kapila  wrote:
> The other alternative is to remember this information in the SubPlan.  We
> can retrieve the parallel_safe information from best_path and use it
> while generating the SubPlan.  The main reason for storing it in the plan
> was to avoid explicitly passing parallel_safe information while
> generating the SubPlan, as the plan was already available at that time.
> However, it seems there are only two places in the code (refer to
> build_subplan) where this information needs to be propagated.  Let me
> know if you prefer to remember the parallel_safe information in the
> SubPlan instead of in the Plan, or if you have something else in mind.

I think we should try doing it in the SubPlan, at least, and see if
that comes out more elegant than what you have at the moment.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] merging some features from plpgsql2 project

2017-01-11 Thread Merlin Moncure
On Wed, Jan 11, 2017 at 2:57 PM, Robert Haas  wrote:
> - The E'' syntax and the standard_conforming_strings GUC were added in
> PostgreSQL 8.0.  The only legal value of standard_conforming_strings
> was "false".
>
> - In PostgreSQL 8.1, it became possible to set
> standard_conforming_strings to "true", but the default was still
> "false".
>
> - In PostgreSQL 9.1, the default was changed to "true".
>
> So there 6 major release from the time the GUC was added and 5 from
> the time it became mutable before the default was flipped.   We've now
> had 5 more since the default was changed to "true".  (No, it's not
> time to remove the GUC yet.  At least not in my opinion.)
>
> One thing that made changing standard_conforming_strings particularly
> painful was that it had knock-on effects on many language-specific
> drivers not maintained by the core project (or just plain not
> maintained).  I don't think the language changes being proposed here
> for PL/pgsql would have the same kind of impact, but some of them
> would make it significantly harder to migrate to PostgreSQL from
> Oracle, which some people might see as an anti-goal (as per other
> nearby threads on making that easier).

I don't think it's a simple matter of waiting N or N+M releases
(although I certainly did appreciate that we did it regardless).  It
comes down to this: there's just no way to release changes that break
a lot of code without breaking a lot of code.  It's all about
acknowledging that and judging it acceptable against the benefits you
get.   For posterity, with respect to conforming strings, SQL
injection is an absolute scourge of the computing world so on balance
we did the right thing.  Having said that, It's always good to do the
math and the calculation is primarily an economic one, I think,

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Gather Merge

2017-01-11 Thread Robert Haas
On Sun, Dec 4, 2016 at 7:36 PM, Haribabu Kommi  wrote:
> On Thu, Nov 24, 2016 at 11:12 PM, Rushabh Lathia 
> wrote:
>> PFA latest patch with fix as well as few cosmetic changes.
>
> Moved to next CF with "needs review" status.

I spent quite a bit of time on this patch over the last couple of
days.  I was hoping to commit it, but I think it's not quite ready for
that yet and I hit a few other issues along the way.  Meanwhile,
here's an updated version with the following changes:

* Adjusted cost_gather_merge because we don't need to worry about less
than 1 worker.
* Don't charge double maintenance cost of the heap per 34ca0905.  This
was pointed out previously and Rushabh said it was fixed, but it wasn't
fixed in v5.
* cost_gather_merge claimed to charge a slightly higher IPC cost
because we have to block, but didn't.  Fix it so it does.
* Move several hunks to more appropriate places in the file, near
related code or in a more logical position relative to surrounding
code.
* Fixed copyright dates for the new files.  One said 2015, one said 2016.
* Removed unnecessary code from create_gather_merge_plan that tried to
handle an empty list of pathkeys (shouldn't happen).
* Make create_gather_merge_plan more consistent with
create_merge_append_plan.  Remove make_gather_merge for the same
reason.
* Changed generate_gather_paths to generate gather merge paths.  In
the previous coding, only the upper planner nodes ever tried to
generate gather merge nodes, but that seems unnecessarily limiting,
since it could be useful to generate a gathered path with pathkeys at
any point in the tree where we'd generate a gathered path with no
pathkeys.
* Rewrote generate_ordered_paths() logic to consider only the one
potentially-useful path not now covered by the new code in
generate_gather_paths().
* Reverted changes in generate_distinct_paths().  I think we should
add something here but the existing logic definitely isn't right
considering the change to generate_gather_paths().
* Assorted cosmetic cleanup in nodeGatherMerge.c.
* Documented the new GUC enable_gathermerge.
* Improved comments.  Dropped one that seemed unnecessary.
* Fixed parts of the patch to be more pgindent-clean.

Testing this against the TPC-H queries at 10GB with
max_parallel_workers_per_gather = 4, seq_page_cost = 0.1,
random_page_cost = 0.1, work_mem = 64MB initially produced somewhat
demoralizing results.  Only Q17, Q4, and Q8 picked Gather Merge, and
of those only Q17 got faster.  Investigating this led to me realizing
that join costing for parallel joins is all messed up: see
https://www.postgresql.org/message-id/CA+TgmoYt2pyk2CTyvYCtFySXN=jsorgh8_mjttlowu5qkjo...@mail.gmail.com

With that patch applied, in my testing, Gather Merge got picked for
Q3, Q4, Q5, Q6, Q7, Q8, Q10, and Q17, but a lot of those queries get a
little slower instead of a little faster.  Here are the timings --
these are with EXPLAIN ANALYZE, so take them with a grain of salt --
first number is without Gather Merge, second is with Gather Merge:

Q3 16943.938 ms -> 18645.957 ms
Q4 3155.350 ms -> 4179.431 ms
Q5 13611.484 ms -> 13831.946 ms
Q6 9264.942 ms -> 8734.899 ms
Q7 9759.026 ms -> 10007.307 ms
Q8 2473.899 ms -> 2459.225 ms
Q10 13814.950 ms -> 12255.618 ms
Q17 49552.298 ms -> 46633.632 ms

I haven't really had time to dig into these results yet, so I'm not
sure how "real" these numbers are and how much is run-to-run jitter,
EXPLAIN ANALYZE distortion, or whatever.  I think this overall concept
is good, because there should be cases where it's substantially
cheaper to preserve the order while gathering tuples from workers than
to re-sort afterwards.  But this particular set of results is a bit
lackluster.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


gather-merge-v6.patch
Description: application/download

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Do we support using agg or window functions in delete statement?

2017-01-11 Thread 高增琦
Thanks a lot for reply.

2017-01-11 20:46 GMT+08:00 Tom Lane :

> 高增琦  writes:
> > In transformDeleteStmt:
>
> > qry->hasWindowFuncs = pstate->p_hasWindowFuncs;
> > qry->hasAggs = pstate->p_hasAggs;
> > if (pstate->p_hasAggs)
> > parseCheckAggregates(pstate, qry);
>
> > Do we support using agg or window function in delete statement?
> > Or, this code should be removed?
>
> I think it's dead code given the syntactic limitations on DELETE,
> but I would not be in favor of removing it.  Better to have it there to
> keep transformDeleteStmt looking as much as possible like the other ones.
> It's not like that's either expensive or a lot of code.
>

At present, only transformSelectStmt and transformSetOperationStmt
have parseCheckAggregates. None of the other transformXXXStmt functions
contain it.

This inconsistency is what prompted the question in my first mail.
I think it may be better to do something about it.



>
> An example of why this would be penny-wise and pound-foolish is that
> we might choose to apply the check that you can't write aggregates in
> DELETE inside parseCheckAggregates.  (We don't, but it's not an impossible
> future restructuring.)
>
> regards, tom lane
>



-- 
GaoZengqi
pgf...@gmail.com
zengqi...@gmail.com


Re: [HACKERS] parallelize queries containing subplans

2017-01-11 Thread Amit Kapila
On Tue, Jan 10, 2017 at 10:55 PM, Robert Haas  wrote:
> On Wed, Dec 28, 2016 at 1:17 AM, Amit Kapila  wrote:
>> Currently, queries that have references to SubPlans or
>> AlternativeSubPlans are considered parallel-restricted.  I think we
>> can lift this restriction in many cases especially when SubPlans are
>> parallel-safe.  To make this work, we need to propagate the
>> parallel-safety information from path node to plan node and the same
>> could be easily done while creating a plan.  Another option could be
>> that instead of propagating parallel-safety information from path to
>> plan, we can find out from the plan if it is parallel-safe (doesn't
>> contain any parallel-aware node) by traversing whole plan tree, but I
>> think it is a waste of cycles.  Once we have parallel-safety
>> information in the plan, we can use that for detection of
>> parallel-safe expressions in max_parallel_hazard_walker().  Finally,
>> we can pass all the subplans to workers during plan serialization in
>> ExecSerializePlan().  This will enable workers to execute subplans
>> that are referred in parallel part of the plan.  Now, we might be able
>> to optimize it such that we pass only subplans that are referred in
>> parallel portion of plan, but I am not sure if it is worth the trouble
>> because it is one-time cost and much lesser than other things we do
>> (like creating
>> dsm, launching workers).
>
> It seems unfortunate to have to add a parallel_safe flag to the
> finished plan; the whole reason we have the Path-Plan distinction is
> so that we can throw away information that won't be needed at
> execution time.  The parallel_safe flag is, in fact, not needed at
> execution time, but just for further planning.  Isn't there some way
> that we can remember, at the time when a sublink is converted to a
> subplan, whether or not the subplan was created from a parallel-safe
> path?
>

The other alternative is to remember this information in the SubPlan.  We
can retrieve the parallel_safe information from best_path and use it
while generating the SubPlan.  The main reason for storing it in the plan
was to avoid explicitly passing parallel_safe information while
generating the SubPlan, as the plan was already available at that time.
However, it seems there are only two places in the code (refer to
build_subplan) where this information needs to be propagated.  Let me
know if you prefer to remember the parallel_safe information in the
SubPlan instead of in the Plan, or if you have something else in mind.
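
(A sketch of that alternative, for concreteness -- a flag carried on the
SubPlan node and filled in from the path, rather than kept on every Plan;
field placement and names are illustrative:)

	/* in primnodes.h: a new field on struct SubPlan */
	bool		parallel_safe;	/* is it safe to run in a parallel worker? */

	/* in build_subplan(), where the SubPlan node is constructed: */
	splan->parallel_safe = best_path->parallel_safe;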


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Retiring from the Core Team

2017-01-11 Thread Paul Ramsey
On Wed, Jan 11, 2017 at 4:29 PM, Josh Berkus  wrote:

>
> For that reason, as of today, I am stepping down from the PostgreSQL
> Core Team.
>

Massive thanks Josh, you've been a great advocate and a wonderful bridge
into the PgSQL community for those of us finding our way towards the warm
centre of PgSQL.
ATB,
P


Re: [HACKERS] CONNECTION LIMIT and Parallel Query don't play well together

2017-01-11 Thread David Rowley
> On 12 January 2017 at 09:36, Robert Haas  wrote:
>> One option is certainly to decide categorically that background
>> workers are not connections, and therefore CountUserBackends() should
>> ignore them and InitializeSessionUserId() shouldn't call it when the
>> session being started is a background worker.  That means that
>> background workers don't count against the user connection limit, full
>> stop.

I've attached a patch which is intended to assist discussion on this topic.

The patch adds some notes to the docs to mention that background
workers and prepared xacts are not counted against CONNECTION LIMIT; it
then goes on to make CountUserBackends() ignore bgworkers. It was
already ignoring prepared xacts. There's a bit of plumbing work to
make the proc array aware of the background worker status. Hopefully
this is suitable; I'm not all that close to that particular area of
the code.
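
(For illustration, the core of the CountUserBackends() change amounts to
something like the following inside its proc array loop, assuming the
plumbing mentioned above exposes a per-entry background-worker flag:)

	for (index = 0; index < arrayP->numProcs; index++)
	{
		int			pgprocno = arrayP->pgprocnos[index];
		volatile PGPROC *proc = &allProcs[pgprocno];

		if (proc->pid == 0)
			continue;			/* do not count prepared xacts */
		if (proc->isBackgroundWorker)
			continue;			/* do not count background workers */
		if (proc->roleId == roleid)
			count++;
	}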

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


connection_limit_ignore_bgworkers_v1.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] An isolation test for SERIALIZABLE READ ONLY DEFERRABLE

2017-01-11 Thread Michael Paquier
On Thu, Jan 12, 2017 at 4:09 AM, Robert Haas  wrote:
> On Tue, Jan 10, 2017 at 11:41 PM, Michael Paquier
>  wrote:
>> Do you think that expanding the wait query by default could be
>> intrusive for the other tests? I am wondering whether such a white list
>> could generate false positives for the existing tests, including
>> out-of-core extensions, as all the tests now rely only on
>> pg_blocking_pids().
>
> It won't affect anything unless running at transaction isolation level
> serializable with the "read only deferrable" option.

Indeed, as monitoring.sgml says. And the chance of impact is now very
likely close to zero. It would be nice to add a comment in the patch just
to mention that. In short, I withdraw my concerns about this patch; the
addition of a test for repeatable read outweighs the tweaks done in the
isolation tester. I am marking this as ready for committer; I have not
spotted issues with it.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Retiring from the Core Team

2017-01-11 Thread David Fetter
On Wed, Jan 11, 2017 at 04:29:19PM -0800, Josh Berkus wrote:
> Hackers:
> 
> You will have noticed that I haven't been very active for the past
> year.  My new work on Linux containers and Kubernetes has been even
> more absorbing than I anticipated, and I just haven't had a lot of
> time for PostgreSQL work.
> 
> For that reason, as of today, I am stepping down from the PostgreSQL
> Core Team.

Thanks for all your hard work.  It's not easy stepping into the (to
hackers) un-sexiest role in the project, namely PR.  I think I speak
for all of us when I express our appreciation for your stepping up to
this and staying at it, year after year.

Best,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david(dot)fetter(at)gmail(dot)com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Multiple synchronous_standby_names rules

2017-01-11 Thread Michael Paquier
On Thu, Jan 12, 2017 at 9:53 AM, James Sewell  wrote:
> What is needed to support this is the ability to configure Px with something 
> like:
>
>  1 (P1, P2, P3), 1 (D1, D2, D3)
>
> Would there be any appetite for this - or would it be seen as an
> overcomplication of the current rules?

There have been discussions about being able to do that, and there
really are use cases where that would be useful. As quorum commit has
lately been committed, we have a better idea of the grammar to use
(yeah!), though a couple of things remain regarding the design of node
subsets:
- How to define group names? Making them mandatory would likely be the
way to go.
- How to represent that intuitively in pg_stat_replication? Perhaps
the answer here is an extra column in this system view.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Multiple synchronous_standby_names rules

2017-01-11 Thread James Sewell
Hello,

When working with a production (P) and a DR (D) environment it is often a
requirement to be able to protect against data loss when promoting within a
site, and also when losing P and promoting a master at D.

The current synchronous_standby_names setting does not allow this.

In a simple example we could take the following nodes:

P1 (current master), P2, P3
D1, D2, D3

Where P1 is replicating to (P2, P3, D1, D2, D3).

The closest synchronous_standby_names  setting you could get to my use case
would be:

1 (D1, D2, D3)

This would allow the loss of either site without losing data - however it
would not allow promotion within site P from P1 -> (P2 | P3)  without the
potential for data loss.

What is needed to support this is the ability to configure Px with
something like:

 1 (P1, P2, P3), 1 (D1, D2, D3)

Would there be any appetite for this - or would it be seen as an
overcomplication of the current rules?

Cheers,


James Sewell,
PostgreSQL Team Lead / Solutions Architect



Suite 112, Jones Bay Wharf, 26-32 Pirrama Road, Pyrmont NSW 2009
*P* (+61) 2 8099 9000  *W* www.jirotech.com  *F* (+61) 2 8099 9099



Re: [HACKERS] Retiring from the Core Team

2017-01-11 Thread Stephen Frost
Josh,

* Josh Berkus (j...@agliodbs.com) wrote:
> For that reason, as of today, I am stepping down from the PostgreSQL
> Core Team.

I'm sure you'll hear this a lot, but:

Thank you.

Your leadership as a member of core and your focus on advocacy has
definitely helped this community and project move forward and your
contributions will be sorely missed.

I don't believe we could ever replace you, to be clear, but hopefully we
will be able to find others to step up in those areas you were focused
on and continue to grow as a community and a project.

Thanks again!

Stephen


signature.asc
Description: Digital signature


[HACKERS] Retiring from the Core Team

2017-01-11 Thread Josh Berkus
Hackers:

You will have noticed that I haven't been very active for the past year.
 My new work on Linux containers and Kubernetes has been even more
absorbing than I anticipated, and I just haven't had a lot of time for
PostgreSQL work.

For that reason, as of today, I am stepping down from the PostgreSQL
Core Team.

I joined the PostgreSQL Core Team in 2003.  I decided to take on project
advocacy, with the goal of making PostgreSQL one of the top three
databases in the world.  Thanks to the many contributions by both
advocacy volunteers and developers -- as well as the efforts by
companies like EnterpriseDB and Heroku -- we've achieved that goal.
Along the way, we proved that community ownership of an OSS project can
compete with, and ultimately outlast, venture-funded startups.

Now we need new leadership who can take PostgreSQL to the next phase of
world domination.  So I am joining Vadim, Jan, Thomas, and Marc in
clearing the way for others.

I'll still be around and still contributing to PostgreSQL in various
ways, mostly around running the database in container clouds.  It'll
take a while for me to hand off all of my PR responsibilities for the
project (assuming that I ever hand all of them off).

It's been a long, fun ride, and I'm proud of the PostgreSQL we have
today: both the database, and the community.  Thank you for sharing it
with me.

-- 
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Packages: Again

2017-01-11 Thread Craig Ringer
On 12 Jan. 2017 02:59, "Joshua D. Drake"  wrote:

-hackers,

I know we have talked about this before but today it was impressed upon me
rather firmly. I presented a Webinar: Postgres for Oracle People. The
attendees were 90% pl/pgsql developers. 330 people registered for an event
that was only allowed to host 100 people. The webinar went on for 2 hours.
(it was only scheduled for one hour; that is how interactive it was)

By far the tagline of this webinar from attendees was, "We can not port
without packages"


What aspects / features of packages were the key issues?

Initialisation?

Variables?

Performance features like pre compilation?

Signing?

Code obfuscation?

...?


So this is a reality. If we want tried and true Oracle developers to port
to PostgreSQL, we must provide some level of package capability.

There are some that would say we don't need them. You are right, we don't
need them. We should however want them if we want to continue to stomp
through the business sector and continue growth.

I use this post to inspire conversation on how we can get this done.

Sincerely,

JD

-- 
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Packages: Again

2017-01-11 Thread Gilles Darold
On 11/01/2017 at 20:32, Pavel Stehule wrote:
>
>
> 2017-01-11 19:57 GMT+01:00 Joshua D. Drake  >:
>
> -hackers,
>
> I know we have talked about this before but today it was impressed
> upon me rather firmly. I presented a Webinar: Postgres for Oracle
> People. The attendees were 90% pl/pgsql developers. 330 people
> registered for an event that was only allowed to host 100 people.
> The webinar went on for 2 hours. (it was only scheduled for one
> hour, that is how interactive it was)
>
> By far the tagline of this webinar from attendees was, "We can not
> port without packages"
>
> So this is a reality. If we want tried and true Oracle developers
> to port to PostgreSQL, we must provide some level of package
> capability.
>
> There are some that would say we don't need them. You are right,
> we don't need them. We should however want them if we want to
> continue to stomp through the business sector and continue growth.
>
>
> We have schemas instead - the PostgreSQL schema is close to Oracle
> packages.
>
> What we cannot substitute are package variables now - see my
> proposal for session variables.
>
> Now I am working on migrating a large Oracle project - I see more
> significant issues:
>
> 1. no good tools - ora2pg does a lot of work, but the PL/SQL -> PL/pgSQL
> migration support is basic
> 2. some things in Postgres are different - boolean type, enum types,
> date type, OUT parameters ..
> 3. some things are really different - NULL versus empty string
> 4. there are no good postprocessing tools - a PL/pgSQL beautifier
> (formatter), an SQL formatter
> 5. developers are still using Oracle outer joins - there is no 100%
> automatic migration
> 6. missing common patterns for deployment and tests for a really big
> set of code.
>
> Now I work on migrating about 500K lines - and it is terrible work. It
> is a 20-year-old project - lots of the code is not clean, it is hard to
> migrate, it is hard to clean. Sure, there is not one line of tests.
>
> If we miss something, it is a modern robust tool for migration - big
> thanks to the ora2pg maintainers and developers - without it, there is
> nothing free.
>  
> Regards
>
> Pavel


Hi,

I'm currently working on release 19.0 of Ora2Pg; I hope to get it out
this weekend. This release has a major rewrite of the PL/SQL rewriter.
Some of the issues you've reported to me, Pavel, are already solved in
this branch; some others have been fixed after your reports. The rewriter
no longer has limitations in rewriting function calls like decode();
the previous version failed to rewrite a function call when another
function or sub-select was called inside it.

Ora2Pg has always used a schema to replace a package, and I think it is
a good equivalent to Oracle's packages; we don't need more. The only
thing that is missing is global variables. Release 19.0 of Ora2Pg will
try to address this problem by exporting global variables declared in
the package as PostgreSQL user-defined custom variables, replacing all
reads of these variables in the plpgsql code with

 current_setting('schema_name.var_name')::var_type

and all assignments to these variables with

SELECT / PERFORM set_config('schema_name.var_name', var_value, false);
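
For illustration, roughly what the translation looks like (a sketch; the
package name pkg and variable counter are hypothetical):

  -- Oracle:  pkg.counter := pkg.counter + 1;
  -- PL/pgSQL equivalent (assumes the variable was already set earlier):
  PERFORM set_config('pkg.counter',
                     (current_setting('pkg.counter')::integer + 1)::text,
                     false);  -- false = keep until end of session

  -- Reading the variable:
  SELECT current_setting('pkg.counter')::integer;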

This works great, but the one difference from Oracle's global variables
is that they are visible outside the schema, whereas in Oracle they are
only visible in the package scope when declared in the package body.
Perhaps we can work on having these custom variables visible only in a
particular schema, or some other mechanism that makes them accessible
from the plpgsql code only. Ora2Pg doesn't propose any solution for
cursors declared as global variables yet.


Ora2Pg can fully rewrite Oracle functions with autonomous transactions
using a wrapper with a call to dblink or pg_background if you enable it.
It just works perfectly.

About Oracle's OUTER JOIN notation, (+): the next release of Ora2Pg will
be able to rewrite the simple form of RIGHT OUTER JOIN; LEFT OUTER JOIN
will come soon. Note that there is no automatic tool to rewrite the
Oracle outer join syntax, but you can use TOAD to convert the queries
into ANSI syntax -- unfortunately query by query, and manually. The next
version will also add better support for exporting Oracle Text Search
indexes using pg_trgm, unaccent and FTS.

Including an SQL and plpgsql code beautifier is also on my todo list,
probably available in the next major version, 20.

In my opinion, Ora2Pg can do more automatic work, but it needs some more
development. I would like to give all my time to improving this project
and give you a better tool, but this is not really possible for the
moment, and unfortunately my spare time is not extensible. I'm doing my
best to get out more releases; your reports and feedback help me a lot
to add more automatic migration code. I don't think it is possible to
have a 100% automatic migration, because there will always be 

Re: [HACKERS] Passing query string to workers

2017-01-11 Thread Tom Lane
Robert Haas  writes:
> On Wed, Jan 11, 2017 at 7:37 AM, Tom Lane  wrote:
>> As far as reproducing the pg_stat_activity query goes, you could probably
>> grab that string out of the master backend's pgstat entry and pass it over
>> at parallel query start.  But please don't confuse it with either
>> debug_query_string or the string referenced by the QueryDesc; I do not
>> think there's any guarantee that those are the same.

> I think we should pass only the string referenced by the QueryDesc to
> the worker, and on the worker side report that via debug_query_string
> and pg_stat_activity and attach it to the QueryDesc itself.  Is that
> also your view?

That would work, if you had a way to get at the active QueryDesc ...
but we don't pass that down to executor nodes.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Passing query string to workers

2017-01-11 Thread Robert Haas
On Wed, Jan 11, 2017 at 7:37 AM, Tom Lane  wrote:
> Rafia Sabih  writes:
>> Approach:
>> A token for query string is created in the shared memory, this token is
>> populated with the query string using the global string --
>> debug_query_string. Now, for each of the worker when
>> ExecGetParallelQueryDesc is called, we retrieve the query text from shared
>> memory and pass it to CreateQueryDesc.
>
> This is simply wrong, because debug_query_string need have nothing to do
> with what the actual parallel query is.  I'm all for sending over the
> correct query, but if you aren't bothering to accurately reproduce the
> master's query descriptor, I think you will increase confusion not
> reduce it.
>
> As far as reproducing the pg_stat_activity query goes, you could probably
> grab that string out of the master backend's pgstat entry and pass it over
> at parallel query start.  But please don't confuse it with either
> debug_query_string or the string referenced by the QueryDesc; I do not
> think there's any guarantee that those are the same.

I think we should pass only the string referenced by the QueryDesc to
the worker, and on the worker side report that via debug_query_string
and pg_stat_activity and attach it to the QueryDesc itself.  Is that
also your view?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CONNECTION LIMIT and Parallel Query don't play well together

2017-01-11 Thread David Rowley
On 12 January 2017 at 09:36, Robert Haas  wrote:
> One option is certainly to decide categorically that background
> workers are not connections, and therefore CountUserBackends() should
> ignore them and InitializeSessionUserId() shouldn't call it when the
> session being started is a background worker.  That means that
> background workers don't count against the user connection limit, full
> stop.  Another option, probably slightly less easy to implement, is to
> decide that background workers in general count against the limit but
> parallel workers do not.

I think the root question we need to ask ourselves here is: what is
"CONNECTION LIMIT" for? I have come around to assuming it's meant to
protect the server by giving everyone a fairer chance of getting a
connection to the database. Now, since background workers don't consume
anything from max_connections, I don't really feel that a background
worker should count towards "CONNECTION LIMIT". I'd assume any
CONNECTION LIMITs that are set for a user would be calculated based on
what max_connections is set to. If we want to limit background workers
in the same manner, then perhaps we'd want to invent something like
"WORKER LIMIT N" in CREATE USER.
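
(For concreteness -- the WORKER LIMIT clause below is hypothetical, not
existing syntax:)

  -- Existing syntax: cap concurrent client connections for a role
  CREATE USER app_user CONNECTION LIMIT 10;

  -- Hypothetical extension suggested above, limiting parallel workers:
  -- CREATE USER app_user CONNECTION LIMIT 10 WORKER LIMIT 4;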

> The third option is to count both background
> workers and parallel workers against the limit but somehow recover
> gracefully when this error trips.  But I have no idea how we could
> implement that third option in a reasonable way.

I agree with your view on the third option. I looked at this too, and
it seems pretty horrible to try to do anything in that direction. Even
if we suppressed the ERROR message and had the worker exit, we'd still
briefly consume a background worker slot, which would reduce the chances
of some entitled user connection obtaining one; in fact, this is the
case as it stands today, even if that moment is brief.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: [[Parallel] Shared] Hash

2017-01-11 Thread Peter Geoghegan
On Wed, Jan 11, 2017 at 12:05 PM, Robert Haas  wrote:
> On Wed, Jan 11, 2017 at 2:20 PM, Peter Geoghegan  wrote:
>> You'd probably still want to throw an error when workers ended up not
>> deleting BufFile segments they owned, though, at least for parallel
>> tuplesort.
>
> Don't see why.

Simply because that's not expected as things stand -- why should the
file go away in that context? (Admittedly, that doesn't seem like an
excellent reason now.)

I actually like the idea of a reference count, the more I think about
it, since it doesn't actually have any tension with my original idea
of ownership. If something like a randomAccess parallel tuplesort
leader merge needs to write new segments (which it almost certainly
*won't* anyway, due to my recent V7 changes), then it can still own
those new segments itself, alone, and delete them on its own in the
manner of conventional temp files, because we can still restrict the
shared refcount mechanism to the deletion of "initial" segments. The
refcount == 0 deleter only deletes those initial segments, and not any
same-BufFile segments that might have been added (added to append to
our unified BufFile within leader). I think that parallel hash join
won't use this at all, and, as I said, it's only a theoretical
requirement for parallel tuplesort, which will generally recycle
blocks from worker temp files for its own writes all the time for
randomAccess cases, the only cases that ever write within logtape.c.

So, the only BufFile shared state needed, that must be maintained over
time, is the refcount variable itself. The size of the "initial"
BufFile (from which we derive number of new segments during
unification) is passed, but it doesn't get maintained in shared
memory. BufFile size remains a one way, one time message needed during
unification. I only really need to tweak things in fd.c temp routines
to make all this work.
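
(To make the shared state concrete: it could be as small as the
following sketch; the struct and field names here are hypothetical, not
an existing API.)

  /* Hypothetical minimal shared per-BufFile state */
  typedef struct SharedBufFileState
  {
      slock_t     mutex;          /* protects refcount */
      int         refcount;       /* attached backends; whoever drops it
                                   * to 0 deletes the initial segments */
      int64       initial_size;   /* size of unified BufFile, sent once */
  } SharedBufFileState;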

This is something I like because it makes certain theoretically useful
things easier. Say, for example, we wanted to have tuplesort workers
merge worker final materialized tapes (their final output), in order
to arrange for the leader to have fewer than $NWORKER runs to merge at
the end -- that's made easier by the refcount stuff. (I'm still not
convinced that that's actually going to make CREATE INDEX faster.
Still, it should, on general principle, be easy to write a patch that
makes it happen -- a good overall design should leave things so that
writing that prototype patch is easy.)

>> This idea is something that's much more limited than the
>> SharedTemporaryFile() API that you sketched on the parallel sort
>> thread, because it only concerns resource management, and not how to
>> make access to the shared file concurrency safe in any special,
>> standard way.
>
> Actually, I only intended that sketch to be about resource management.
> Sounds like I didn't explain very well.

I'm glad to hear that, because I was very puzzled by what you said. I
guess I was thrown off by "shared read pointers". I don't want to get
into the business of flushing out dirty buffers, or making sure that
every backend stays in lockstep about what the total size of the
BufFile needs to be. It's so much simpler to just have clear
"barriers" for each parallel operation, where backends present a large
amount of immutable state to one other backend at the end, and tells
it how big its BufFile is only once. (It's not quite immutable, since
randomAccess recycle of temp files can happen within logtape.c, but
the point is that there should be very little back and forth -- that
needs to be severely restricted.)

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WARM and indirect indexes

2017-01-11 Thread Robert Haas
On Tue, Jan 10, 2017 at 2:24 PM, Alvaro Herrera
 wrote:
> The big advantage of WARM is that it works automatically, like HOT: the
> user doesn't need to do anything different than today to get the
> benefit.  With indirect indexes, the user needs to create the index as
> indirect explicitly.

However, this cuts both ways.  If the WARM implementation has bugs --
either data-corrupting bugs or crash bugs or returns-wrong-answer bugs
or performance-in-corner-cases bugs -- everyone will be exposed to
them.  The kinds of things that could go wrong here are in my view
very similar to the sorts of things that went wrong with multixacts.
If indirect indexes turn out to have similar problems, that's sad, but
people can decide not to use them and be no worse off than before.

(This is exactly why parallel query is now provoking periodic griping
rather than howls of agony - it's off by default in 9.6, and even if
you turn it on, the costing defaults are set fairly conservatively,
and even if it goes completely haywire due to some horrible bug, it's
very unlikely to break anything for longer than it takes you to shut
it back off.  It's possible that WARM bugs would only mess up your
indexes and not your table data, which is a lot less bad, but things
that touch the on-disk format are near the top of my "this is scary"
list.)

More broadly, I don't share Bruce's negativity about indirect indexes.
My estimate of what needs to be done for them to be really useful is -
I think - higher than your estimate of what needs to be done, but I
think the concept is great.  I also think that some of the concepts -
like allowing the tuple pointers to have widths other than 6 bytes -
could turn out to be a good foundation for global indexes in the
future.  In fact, it might be considerably easier to make an indirect
index span a partitioning hierarchy than it would be to do the same
for a regular index.  But regardless of that, the feature is good for
what it offers today.

As to WARM, it scares me a lot.  If zero serious data-corrupting bugs
survive to final release, or even if no more than one or two do, and
if it improves performance in general in the way some of the offered
benchmarks indicate, fantastic.  But a feature of this type is by its
nature prone to eating your data in varied and subtle ways.  Caveat
committer.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Logical Replication WIP

2017-01-11 Thread Peter Eisentraut
On 1/11/17 3:35 PM, Petr Jelinek wrote:
> On 11/01/17 18:32, Peter Eisentraut wrote:
>> On 1/11/17 3:29 AM, Petr Jelinek wrote:
>>> Okay, looking into my notes, I originally did this because we did not
>>> allow adding tables without pkeys to publications which effectively
>>> prohibited FOR ALL TABLES publication from working because of
>>> information_schema without this. Since this is no longer the case I
>>> think it's safe to skip the FirstNormalObjectId check.
>>
>> Wouldn't that mean that FOR ALL TABLES replicates the tables from
>> information_schema?
>>
> 
> Yes, as they are not catalog tables, I thought that was your point.

But we shouldn't do that.  So we need to exclude information_schema from
"all tables" somehow.  Just probably not by OID, since that is not fixed.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Logical Replication WIP

2017-01-11 Thread Peter Eisentraut
On 1/11/17 3:35 PM, Petr Jelinek wrote:
> On 11/01/17 18:27, Peter Eisentraut wrote:
>> On 1/11/17 3:11 AM, Petr Jelinek wrote:
>>> That will not help; the issue is that we consider names for origins to
>>> be unique across the cluster, while subscription names are per-database,
>>> so if there is an origin per subscription (which there has to be) it will
>>> always clash if we just use the name. I have already locally changed this
>>> to a pg_ naming scheme and it works fine.
>>
>> How will that make it unique across the cluster?
>>
>> Should we include the system ID from pg_control?
>>
> 
> pg_subscription is a shared catalog, so OIDs are unique.

Oh, I see what you mean by cluster now.  It's a confusing term.


-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Couple of issues with prepared FETCH commands

2017-01-11 Thread Andrew Gierth
> "Robert" == Robert Haas  writes:

 >> But the problem that actually came up is this: if you do the
 >> PQprepare before the named cursor has actually been opened, then
 >> everything works _up until_ the first event, such as a change to
 >> search_path, that forces a revalidation; and at that point it fails
 >> with the "must not change result type" error _even if_ the cursor
 >> always has exactly the same result type.  This happens because the
 >> initial prepare actually stored NULL for plansource->resultDesc,
 >> since the cursor name wasn't found, while on the revalidate, when
 >> the cursor obviously does exist, it gets the actual result type.
 >> 
 >> It seems a bit of a "gotcha" to have it fail in this case when the
 >> result type isn't actually being checked in other cases.

 Robert> To me, that sounds like a bug.

So what's the appropriate fix? My suggestion would be to suppress the
result type check entirely for utility statements; EXPLAIN and SHOW
always return the same thing anyway, and both FETCH and EXECUTE are
subject to the issue described. This would mean conceding that the
result descriptor of a prepared FETCH or EXECUTE might change (i.e. a
Describe of the statement might not be useful, though a Describe of an
opened portal would be ok). I think this would result in the most
obviously correct behavior from the client point of view.
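
(A minimal sketch of the failure mode described above, using libpq;
error handling omitted and all names hypothetical:)

  #include <libpq-fe.h>

  static void
  demo(PGconn *conn)
  {
      /* Prepare FETCH before the cursor exists: resultDesc is stored as NULL */
      PQprepare(conn, "f", "FETCH ALL FROM c", 0, NULL);

      PQexec(conn, "BEGIN");
      PQexec(conn, "DECLARE c CURSOR FOR SELECT 1 AS x");

      /* Any event that forces plan revalidation, e.g. a search_path change */
      PQexec(conn, "SET search_path = public");

      /* Fails with "cached plan must not change result type", even though
       * the cursor's result type never actually changed. */
      PQexecPrepared(conn, "f", 0, NULL, NULL, NULL, 0);
  }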

-- 
Andrew (irc:RhodiumToad)


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] plan_rows confusion with parallel queries

2017-01-11 Thread Robert Haas
On Wed, Jan 11, 2017 at 1:24 PM, Robert Haas  wrote:
>> Well, it's not *that* consistent.  If we were estimating all the numbers
>> underneath the Gather as being per-worker numbers, that would make some
>> amount of sense.  But neither the other seqscan, nor the hash on it, nor
>> the hashjoin's output count are scaled that way.  It's very hard to call
>> the above display anything but flat-out broken.
>
> While investigating why Rushabh Lathia's Gather Merge patch sometimes
> fails to pick a Gather Merge plan even when it really ought to do so,
> I ran smack into this problem.  I discovered that this is more than a
> cosmetic issue.  The costing itself is actually badly broken.
>
> The reason why this is happening is that final_cost_nestloop(),
> final_cost_hashjoin(), and final_cost_mergejoin() don't care a whit
> about whether the path they are generating is partial.  They apply the
> row estimate for the joinrel itself to every such path generated for
> the join, except for parameterized paths which are a special case.  I
> think this generally has the effect of discouraging parallel joins,
> because the inflated row count also inflates the join cost.  I think
> the right thing to do is probably to scale the row count estimate for
> the joinrel's partial paths by the leader_contribution value computed
> in cost_seqscan.
>
> Despite my general hatred of back-patching things that cause plan
> changes, I'm inclined to think the fix for this should be back-patched
> to 9.6, because this is really a brown-paper-bag bug.  If the
> consensus is otherwise I will of course defer to that consensus.

And here is a patch which seems to fix the problem.
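
(For readers without the attachment handy: the shape of the fix is
roughly as below, mirroring the leader_contribution logic in
cost_seqscan. This is a sketch with simplified field names, not the
attached patch itself.)

  /* Scale the joinrel row estimate for a partial join path */
  if (path->parallel_workers > 0)
  {
      double      parallel_divisor = path->parallel_workers;
      double      leader_contribution;

      /* the leader contributes less as more workers are added */
      leader_contribution = 1.0 - (0.3 * path->parallel_workers);
      if (leader_contribution > 0)
          parallel_divisor += leader_contribution;

      path->rows = clamp_row_est(path->rows / parallel_divisor);
  }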

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


parallel-join-rows-v1.patch
Description: application/download

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch: function xmltable

2017-01-11 Thread Alvaro Herrera
Pavel Stehule wrote:

> another update - lot of cleaning

Thanks.

The more I look at this, the less I like using NameArgExpr for
namespaces.  It looks all wrong to me, and it causes ugly code all over.
Maybe I just need to look at it a bit longer.



-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Packages: Again

2017-01-11 Thread Robert Haas
On Wed, Jan 11, 2017 at 3:56 PM, Joshua D. Drake  wrote:
>> We have a schemas instead - the PostgreSQL schema is close to Oracle
>> packages.
>
> No. It isn't.

I'm gonna say "yeah, it is".

And that's all I will say about this topic.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] merging some features from plpgsql2 project

2017-01-11 Thread Robert Haas
On Mon, Jan 9, 2017 at 6:53 PM, Jim Nasby  wrote:
> I do think that whichever route we go, we're going to be stuck supporting
> the old version for a LONG time. A big part of why
> standard_conforming_strings was so ugly is users didn't have enough time to
> adjust. If we'd had that enabled by default for 4-5 releases it wouldn't
> have been nearly as much of an issue.

/me boggles.  I think you are confused about the history here.
standard_conforming_strings had a generously long phase-in period.

- The E'' syntax and the standard_conforming_strings GUC were added in
PostgreSQL 8.0.  The only legal value of standard_conforming_strings
was "false".

- In PostgreSQL 8.1, it became possible to set
standard_conforming_strings to "true", but the default was still
"false".

- In PostgreSQL 9.1, the default was changed to "true".

So there were 6 major releases from the time the GUC was added, and 5
from the time it became mutable, before the default was flipped. We've
now had 5 more since the default was changed to "true".  (No, it's not
time to remove the GUC yet.  At least not in my opinion.)
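
(As a refresher on what the GUC actually changes, a quick sketch:)

  SET standard_conforming_strings = off;
  SELECT 'a\nb';   -- backslash treated as an escape (newline), with a warning
  SET standard_conforming_strings = on;
  SELECT 'a\nb';   -- literal backslash and n; write E'a\nb' to get the escape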

One thing that made changing standard_conforming_strings particularly
painful was that it had knock-on effects on many language-specific
drivers not maintained by the core project (or just plain not
maintained).  I don't think the language changes being proposed here
for PL/pgsql would have the same kind of impact, but some of them
would make it significantly harder to migrate to PostgreSQL from
Oracle, which some people might see as an anti-goal (as per other
nearby threads on making that easier).

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Packages: Again

2017-01-11 Thread Joshua D. Drake

On 01/11/2017 11:32 AM, Pavel Stehule wrote:



We have schemas instead - the PostgreSQL schema is close to Oracle
packages.


No. It isn't.

A Package is essentially a class with dependencies. It has nothing to do 
with schemas beyond being name-qualified. For example:


https://docs.oracle.com/cd/B19306_01/appdev.102/b14261/packages.htm#i4362

JD

--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_restore accepts -j -1

2017-01-11 Thread Stephen Frost
Ashutosh,

* Ashutosh Bapat (ashutosh.ba...@enterprisedb.com) wrote:
> On Tue, Jan 10, 2017 at 10:18 AM, Stephen Frost  wrote:
> > For reasons which seem likely to be entirely unintentional, pg_restore
> > will accept a '-1' for -j:
> >
> > pg_restore -j -1
> >
> > This seems to result in the parallel state being NULL and so things
> > don't outright break, but it hardly seems likely to be what the user was
> > asking for- my guess is that they actually wanted "parallel, single
> > transaction", which we don't actually support:
> >
> > -> pg_restore -j 2 -1
> > pg_restore: cannot specify both --single-transaction and multiple jobs
> >
> > We also don't accept -1 for pg_dump:
> >
> > -> pg_dump -j -1
> > pg_dump: invalid number of parallel jobs
> >
> > If I'm missing something, please let me know, otherwise I'll plan to put
> > the same check into pg_restore which exists in pg_dump.
> 
> Both the code blocks were added by 9e257a18, but I don't see any
> description of why they are different in pg_dump.c and pg_restore.c.
> In fact, per comments in pg_restore.c, that condition should be the same
> as in pg_dump.c. I am not sure whether it's just the Windows-specific
> condition or the whole block, but I don't see any reason not to
> replicate the same conditions in pg_restore.c.

Ok, I've pushed the change.
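
(Presumably the same guard that pg_dump already has, along these lines;
a sketch rather than the exact commit:)

  /* in pg_restore.c option handling, mirroring pg_dump.c */
  if (numWorkers <= 0)
  {
      fprintf(stderr, _("%s: invalid number of parallel jobs\n"), progname);
      exit(1);
  }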

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] CONNECTION LIMIT and Parallel Query don't play well together

2017-01-11 Thread Robert Haas
On Tue, Jan 10, 2017 at 4:14 PM, David Rowley
 wrote:
> It has come to my attention that when a user has a CONNECTION LIMIT
> set, and they make use of parallel query, that their queries can fail
> due to the connection limit being exceeded.

That's bad.

> Now, as I understand it, during the design of parallel query, it was
> designed in such a way that nodeGather could perform all of the work
> in the main process in the event that no workers were available, and
> that the only user visible evidence of this would be the query would
> be slower than it would otherwise be.

That was the intent.

> After a little bit of looking around I see that CountUserBackends()
> does not ignore the parallel workers, and counts these as
> "CONNECTIONS". It's probably debatable to weather these are
> connections or not, ...

Yeah.  I think that I looked at the connection limit stuff in the 9.4
time frame and said, well, we shouldn't let people use background
workers as a way of evading the connection limit, so let's continue to
count them against that limit.  Then, later on, I did the work to try
to make it transparent when sufficient parallel workers cannot be
obtained, but forgot about this case or somehow convinced myself that
it didn't matter.

One option is certainly to decide categorically that background
workers are not connections, and therefore CountUserBackends() should
ignore them and InitializeSessionUserId() shouldn't call it when the
session being started is a background worker.  That means that
background workers don't count against the user connection limit, full
stop.  Another option, probably slightly less easy to implement, is to
decide that background workers in general count against the limit but
parallel workers do not.  The third option is to count both background
workers and parallel workers against the limit but somehow recover
gracefully when this error trips.  But I have no idea how we could
implement that third option in a reasonable way.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Logical Replication WIP

2017-01-11 Thread Petr Jelinek
On 11/01/17 18:27, Peter Eisentraut wrote:
> On 1/11/17 3:11 AM, Petr Jelinek wrote:
>> That will not help; the issue is that we consider names for origins to
>> be unique across the cluster, while subscription names are per-database,
>> so if there is an origin per subscription (which there has to be) it will
>> always clash if we just use the name. I have already locally changed this
>> to a pg_ naming scheme and it works fine.
> 
> How will that make it unique across the cluster?
> 
> Should we include the system ID from pg_control?
> 

pg_subscription is a shared catalog, so OIDs are unique.

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Logical Replication WIP

2017-01-11 Thread Petr Jelinek
On 11/01/17 18:32, Peter Eisentraut wrote:
> On 1/11/17 3:29 AM, Petr Jelinek wrote:
>> Okay, looking into my notes, I originally did this because we did not
>> allow adding tables without pkeys to publications which effectively
>> prohibited FOR ALL TABLES publication from working because of
>> information_schema without this. Since this is no longer the case I
>> think it's safe to skip the FirstNormalObjectId check.
> 
> Wouldn't that mean that FOR ALL TABLES replicates the tables from
> information_schema?
> 

Yes, as they are not catalog tables, I thought that was your point.

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pageinspect: Hash index support

2017-01-11 Thread Ashutosh Sharma
Hi,

>
>> + /*
>> +  * We copy the page into local storage to avoid holding pin on 
>> the
>> +  * buffer longer than we must, and possibly failing to release 
>> it at
>> +  * all if the calling query doesn't fetch all rows.
>> +  */
>> + mctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);
>> +
>> + uargs = palloc(sizeof(struct user_args));
>> +
>> + uargs->page = palloc(BLCKSZ);
>
> Is this necessary?  I think this was copied from btreefuncs, but there
> is no buffer release in this code.

Yes, it was copied from btreefuncs and is not required in this case, as
we are already passing raw_page as an input to hash_page_items. I have
taken care of it in the updated patch shared upthread.

With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Packages: Again

2017-01-11 Thread Pavel Stehule
2017-01-11 21:08 GMT+01:00 Bruce Momjian :

> On Wed, Jan 11, 2017 at 08:56:23PM +0100, Fabien COELHO wrote:
> >
> > >I think we need to focus on things that _can't_ be done first, rather
> > >than things that require porting, e.g. until we had savepoints, you
> > >couldn't migrate an application that needed it.  It wasn't a question of
> > >porting --- there was just no way to port it.
> > >
> > >Those _missing_ pieces should be a priority.
> >
> > Nested/autonomous transactions? Do they occur often in PL/SQL code?
>
> Yes, they do based on the number of "I can't port from Oracle"
> complaints we used to get, perhaps related to exceptions.  Once we had
> them, the complaints of that type disappeared.
>

We do not have PL-controllable transactions - so some patterns are not
available in our functions.

On the other hand - so far, all the usage of explicit commit/rollback I
have seen was related to some Oracle issue:

1. missing easy debug printing - there was nothing like RAISE NOTICE;
dbms_output is a poor solution

2. it was a workaround for limited transaction size

In 90% of cases these are solutions to issues that do not exist in
Postgres. It would be nice to have procedures - and it can be a benefit
for all - but it is not too big a gap. When you use Postgres patterns,
then you don't need it - but there is more work in migration.

Regards

Pavel



>
> --
>   Bruce Momjian  http://momjian.us
>   EnterpriseDB http://enterprisedb.com
>
> + As you are, so once was I.  As I am, so you will be. +
> +  Ancient Roman grave inscription +
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


Re: [HACKERS] Packages: Again

2017-01-11 Thread Bruce Momjian
On Wed, Jan 11, 2017 at 03:08:58PM -0500, Bruce Momjian wrote:
> On Wed, Jan 11, 2017 at 08:56:23PM +0100, Fabien COELHO wrote:
> > 
> > >I think we need to focus on things that _can't_ be done first, rather
> > >than things that require porting, e.g. until we had savepoints, you
> > >couldn't migrate an application that needed it.  It wasn't a question of
> > >porting --- there was just no way to port it.
> > >
> > >Those _missing_ pieces should be a priority.
> > 
> > Nested/autonomous transactions? Do they occur often in PL/SQL code?
> 
> Yes, they do based on the number of "I can't port from Oracle"
> complaints we used to get, perhaps related to exceptions.  Once we had
> them, the complaints of that type disappeared.

Oh, I was talking about savepoints/nested-transactions, not autonomous
transactions.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pageinspect: Hash index support

2017-01-11 Thread Ashutosh Sharma
Hi,

> +values[j++] = UInt16GetDatum(uargs->offset);
> +values[j++] = CStringGetTextDatum(psprintf("(%u,%u)",
> +        BlockIdGetBlockNumber(&(itup->t_tid.ip_blkid)),
> +        itup->t_tid.ip_posid));
> +
> +ptr = (char *) itup + IndexInfoFindDataOffset(itup->t_info);
> +dlen = IndexTupleSize(itup) - IndexInfoFindDataOffset(itup->t_info);
>
> It seems like this could be used to index off the end of the page, if
> you feed it invalid data.

okay, I have handled it in the attached patch.

>
> +dump = palloc0(dlen * 3 + 1);
>
> This is wasteful.  Just use palloc and install a terminating NUL byte instead.
>

fixed. Please check the attached patch.

> +sprintf(dump, "%02x", *(ptr + off) & 0xff);
>
> *(ptr + off) is normally written ptr[off].
>

corrected.
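
(Putting those two review points together, the corrected loop presumably
looks something like this sketch, with dlen and ptr as in the quoted
hunk:)

  char   *dump = palloc(dlen * 3 + 1);  /* plain palloc; NUL added below */
  char   *p = dump;
  int     off;

  for (off = 0; off < dlen; off++)
  {
      if (off > 0)
          *p++ = ' ';
      sprintf(p, "%02x", ptr[off] & 0xff);  /* ptr[off], not *(ptr + off) */
      p += 2;
  }
  *p = '\0';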

> +if (pageopaque->hasho_flag != LH_OVERFLOW_PAGE)
> +ereport(ERROR,
> +(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
> + errmsg("page is not an overflow page"),
> + errdetail("Expected %08x, got %08x.",
> +LH_OVERFLOW_PAGE, pageopaque->hasho_flag)));
>
> I think this is an unnecessary test given that you've already called
> verify_hash_page().
>

Yes, it is not required. I have removed it in the attached patch.

> +if (bitmappage >= metap->hashm_nmaps)
> +elog(ERROR, "invalid overflow bit number %u", ovflbitno);
>
> I think this should be an ereport(), because it's reachable given a
> bogus page which a user might construct (or a corrupted page).
>

okay, I have corrected it.

> +test=# SELECT * FROM hash_page_items(get_raw_page('con_hash_index', 1));
> + itemoffset |  ctid   |  data
> ++-+-
> +  1 | (3145728,14376) | 00 c0 ca 3e 00 00 00 00
> +  2 | (3145728,14376) | 00 c0 ca 3e 00 00 00 00
> +  3 | (3407872,14376) | 00 c0 ca 3e 00 00 00 00
>
> Won't the first 4 bytes always be a hash code and the second 4 bytes
> always 0?  Should we just return the hash code as an int4 or int8
> instead of pretending it's a bunch of uninterpretable binary data?
>

Yes, the first 4 bytes represent a hash code and the second 4 bytes
are always zero. The hash code is now being returned as an int4.


> +test=# SELECT * FROM hash_bitmap_info('con_hash_index', 2050);
> + bitmapblkno | bitmapbit
> +-+---
> +  65 | 1
> +
>
> I find this hard to understand.  This says that it returns
> information, but the nature of the returned information is unspecified
> and in my opinion unclear.
>

I have rephrased it to make it more clear.


0001-Add-support-for-hash-index-in-pageinspect-contrib-mo.patch
Description: invalid/octet-stream

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Packages: Again

2017-01-11 Thread Pavel Stehule
2017-01-11 20:56 GMT+01:00 Fabien COELHO :

>
> I think we need to focus on things that _can't_ be done first, rather
>> than things that require porting, e.g. until we had savepoints, you
>> couldn't migrate an application that needed it.  It wasn't a question of
>> porting --- there was just no way to port it.
>>
>> Those _missing_ pieces should be a priority.
>>
>
> Nested/autonomous transactions? Do they occur often in PL/SQL code?
>

There is a relatively well-working workaround - ora2pg is able to
translate it to dblink usage.

Sure - a native solution can be better - using pg_background is a step
forward.

Regards

Pavel


>
> --
> Fabien.
>
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


Re: [HACKERS] Packages: Again

2017-01-11 Thread Bruce Momjian
On Wed, Jan 11, 2017 at 08:56:23PM +0100, Fabien COELHO wrote:
> 
> >I think we need to focus on things that _can't_ be done first, rather
> >than things that require porting, e.g. until we had savepoints, you
> >couldn't migrate an application that needed it.  It wasn't a question of
> >porting --- there was just no way to port it.
> >
> >Those _missing_ pieces should be a priority.
> 
> Nested/autonomous transactions? Do they occur often in PL/SQL code?

Yes, they do based on the number of "I can't port from Oracle"
complaints we used to get, perhaps related to exceptions.  Once we had
them, the complaints of that type disappeared.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Couple of issues with prepared FETCH commands

2017-01-11 Thread Robert Haas
On Tue, Jan 10, 2017 at 5:38 PM, Andrew Gierth
 wrote:
> But the problem that actually came up is this: if you do the PQprepare
> before the named cursor has actually been opened, then everything works
> _up until_ the first event, such as a change to search_path, that forces
> a revalidation; and at that point it fails with the "must not change
> result type" error _even if_ the cursor always has exactly the same
> result type.  This happens because the initial prepare actually stored
> NULL for plansource->resultDesc, since the cursor name wasn't found,
> while on the revalidate, when the cursor obviously does exist, it gets
> the actual result type.
>
> It seems a bit of a "gotcha" to have it fail in this case when the
> result type isn't actually being checked in other cases.

To me, that sounds like a bug.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] merging some features from plpgsql2 project

2017-01-11 Thread Pavel Stehule
2017-01-11 20:53 GMT+01:00 Merlin Moncure :

> On Wed, Jan 11, 2017 at 11:11 AM, Peter Eisentraut
>  wrote:
> > The current syntax was chosen because it is SQL-compatible.  Adding
> > redundant syntax to save a few characters without any new functionality
> > (performance, resource usage, safety, etc.) is a weak argument in the
> > overall scheme of things.
>
> Yeah -- exactly.  The few minor things that are not 100% SQL
> compatible I find to be major headaches.  Incompatible usage of INTO
> for example.
>

We did not design the INTO usage in plpgsql - it is PL/SQL heritage.

PL/SQL = Ada + Oracle SQL -- but sometimes the result is not perfect; Ada
was not designed to be integrated with SQL


>
> This thread has been going on for quite some time now and is starting
> to become somewhat circular. Perhaps we ought to organize the various
> ideas and pain points presented in a wiki, along with conclusions, and
> in some cases note that there is no solution that is compatible with
> the current syntax.
>

There is a language that is much better integrated with SQL - SQL/PSM

http://postgres.cz/wiki/SQL/PSM_Manual

It is less verbose, but still a verbose language. It is a statically
typed language - so it can be bad for some people.

But due to a design based on SQL integration from the start, there are
fewer conflicts between SQL and the PL.

Regards

Pavel


> merlin
>


Re: [HACKERS] WIP: [[Parallel] Shared] Hash

2017-01-11 Thread Peter Geoghegan
On Wed, Jan 11, 2017 at 11:20 AM, Peter Geoghegan  wrote:
>> If multiple processes are using the same file via the BufFile
>> interface, I think that it is absolutely necessary that there should
>> be a provision to track the "attach count" of the BufFile.  Each
>> process that reaches EOXact decrements the attach count and when it
>> reaches 0, the process that reduced it to 0 removes the BufFile.  I
>> think anything that's based on the notion that leaders will remove
>> files and workers won't is going to be fragile and limiting, and I am
>> going to push hard against any such proposal.
>
> Okay. My BufFile unification approach happens to assume that backends
> clean up after themselves, but that isn't a rigid assumption (of
> course, these are always temp files, so we reason about them as temp
> files).

Also, to be clear, and to avoid confusion: I don't think anyone wants
an approach "that's based on the notion that leaders will remove files
and workers won't". All that has been suggested is that the backend
that creates the file should be responsible for deleting the file, by
definition. And, that any other backend that may have files owned by
another backend must be sure to not try to access them after the owner
deletes them. (Typically, that would be ensured by some barrier
condition, some dependency, inherent to how the parallel operation is
implemented.)

I will implement the reference count thing.
-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] New SQL counter statistics view (pg_stat_sql)

2017-01-11 Thread Robert Haas
On Wed, Jan 11, 2017 at 9:17 AM, Dilip Kumar  wrote:
> On Mon, Dec 5, 2016 at 8:01 AM, Haribabu Kommi  
> wrote:
>> Moved to next CF with "needs review" status.
>
> I have started reviewing the patch, Some initial comments.
>
> $ git apply ../pg_stat_sql_row_mode_2.patch
> error: patch failed: src/include/catalog/pg_proc.h:2893
> error: src/include/catalog/pg_proc.h: patch does not apply

I suggest using 'patch' directly.  'git apply' is far too picky.
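
For example, from the top of the source tree (a sketch; -p1 matches the
a/ and b/ prefixes git produces):

  patch -p1 < ../pg_stat_sql_row_mode_2.patch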

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: [[Parallel] Shared] Hash

2017-01-11 Thread Robert Haas
On Wed, Jan 11, 2017 at 2:20 PM, Peter Geoghegan  wrote:
> You'd probably still want to throw an error when workers ended up not
> deleting BufFile segments they owned, though, at least for parallel
> tuplesort.

Don't see why.

> This idea is something that's much more limited than the
> SharedTemporaryFile() API that you sketched on the parallel sort
> thread, because it only concerns resource management, and not how to
> make access to the shared file concurrency safe in any special,
> standard way.

Actually, I only intended that sketch to be about resource management.
Sounds like I didn't explain very well.

> Instead, they should be passing around some kind of minimal
> private-to-buffile state in shared memory that coordinates backends
> participating in BufFile unification. Private state created by
> buffile.c, and passed back to buffile.c. Everything should be
> encapsulated within buffile.c, IMV, making parallel implementations as
> close as possible to their serial implementations.

That seems reasonable although I haven't studied the details carefully as yet.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Packages: Again

2017-01-11 Thread Stephen Frost
Fabien,

* Fabien COELHO (coe...@cri.ensmp.fr) wrote:
> >I think we need to focus on things that _can't_ be done first, rather
> >than things that require porting, e.g. until we had savepoints, you
> >couldn't migrate an application that needed it.  It wasn't a question of
> >porting --- there was just no way to port it.
> >
> >Those _missing_ pieces should be a priority.
> 
> Nested/autonomous transactions? Do they occur often in PL/SQL code?

Yes.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Packages: Again

2017-01-11 Thread Fabien COELHO



I think we need to focus on things that _can't_ be done first, rather
than things that require porting, e.g. until we had savepoints, you
couldn't migrate an application that needed it.  It wasn't a question of
porting --- there was just no way to port it.

Those _missing_ pieces should be a priority.


Nested/autonomous transactions? Do they occur often in PL/SQL code?

--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Packages: Again

2017-01-11 Thread Pavel Stehule
2017-01-11 20:42 GMT+01:00 Bruce Momjian :

> On Wed, Jan 11, 2017 at 08:32:53PM +0100, Pavel Stehule wrote:
> > Now I work on migrating about 500K lines - and it is terrible work. It
> > is a 20-year-old project - lots of the code is not clean, it is hard to
> > migrate, it is hard to clean. Sure, there is not one line of tests.
> >
> > If we miss something, it is a modern robust tool for migration - big
> > thanks to the ora2pg maintainers and developers - without it, there is
> > nothing free.
>
> I think we need to focus on things that _can't_ be done first, rather
> than things that require porting, e.g. until we had savepoints, you
> couldn't migrate an application that needed it.  It wasn't a question of
> porting --- there was just no way to port it.
>
> Those _missing_ pieces should be a priority.
>

There are some workarounds for emulating package variables - and I am
pushing a design for session variables that can be used well as package
variables.

Currently almost all basic functionality (related to PL/SQL) is available
in Postgres. But the migration needs a lot of hard manual work.

It is an analogy with Cobol systems - some applications are so ugly that
another level of emulation is a cheaper solution than migration to
something modern :)

Regards

Pavel


>
> --
>   Bruce Momjian  http://momjian.us
>   EnterpriseDB http://enterprisedb.com
>
> + As you are, so once was I.  As I am, so you will be. +
> +  Ancient Roman grave inscription +
>


Re: [HACKERS] Packages: Again

2017-01-11 Thread Fabien COELHO



We have schemas instead - the PostgreSQL schema is close to Oracle
packages.


Yes, a schema is a kind of "namespace"-level package. Pg also has 
extensions, which group things together and may also contribute to 
packaging.



What we cannot to substitute are package variables, now - see my proposal
for session variables.


I would also like to point out here that Pg has dynamic text session 
variables with a horrible syntax, aka user-defined GUCs. They can be the 
basis for more useful variables if extended with privacy/some access 
control, typing, better syntax, possibly some kind of persistent 
declarations, and so on.



[...]


Good luck with your migration...

--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal for changes to recovery.conf API

2017-01-11 Thread Peter Eisentraut
On 1/11/17 12:53 PM, Simon Riggs wrote:
>> I'm concerned that having signal files somewhere else opens up a bunch
>> more edge cases that need to be considered.  For example, what if
>> someone puts a signal file into a temporary directory that is cleared
>> after a server crash and restart.  That can mess up a bunch of things.
> 
> We already have trigger_file as a way to specify an alternate
> directory, so if I remove signal_file_directory I will need to replace
> trigger_file as a parameter.

The trigger file is for promotion, which is just a momentary nudge to
the server.  The recovery signal file or whatever we end up calling it
is long-term state, because if it disappears and the server restarts, it
will do something different.  So they are different.

I'm not strictly opposed to making the name or directory configurable,
but I'm predicting it will raise more questions than we might want to
deal with.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] merging some features from plpgsql2 project

2017-01-11 Thread Merlin Moncure
On Wed, Jan 11, 2017 at 11:11 AM, Peter Eisentraut
 wrote:
> The current syntax was chosen because it is SQL-compatible.  Adding
> redundant syntax to save a few characters without any new functionality
> (performance, resource usage, safety, etc.) is a weak argument in the
> overall scheme of things.

Yeah -- exactly.  The few minor things that are not 100% SQL
compatible I find to be major headaches.  Incompatible usage of INTO
for example.

This thread has been going on for quite some time now and is starting
to become somewhat circular. Perhaps we ought to organize the various
ideas and pain points presented in a wiki, along with conclusions, and
in some cases note that there is no solution that is compatible with
the current syntax.

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Packages: Again

2017-01-11 Thread Bruce Momjian
On Wed, Jan 11, 2017 at 08:32:53PM +0100, Pavel Stehule wrote:
> Now I work on migrating about 500K lines - and it is terrible work. It is
> a 20-year-old project - lots of the code is not clean, it is hard to
> migrate, it is hard to clean. Sure, there is not one line of tests.
> 
> If we miss something, it is a modern robust tool for migration - big thanks
> to the ora2pg maintainers and developers - without it, there is nothing
> free.

I think we need to focus on things that _can't_ be done first, rather
than things that require porting, e.g. until we had savepoints, you
couldn't migrate an application that needed it.  It wasn't a question of
porting --- there was just no way to port it.

Those _missing_ pieces should be a priority.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Packages: Again

2017-01-11 Thread Pavel Stehule
2017-01-11 19:57 GMT+01:00 Joshua D. Drake :

> -hackers,
>
> I know we have talked about this before but today it was impressed upon me
> rather firmly. I presented a Webinar: Postgres for Oracle People. The
> attendees were 90% pl/pgsql developers. 330 people registered for an event
> that was only allowed to host 100 people. The webinar went on for 2 hours.
> (it was only scheduled for one hour, that is how interactive it was)
>
> By far the tagline of this webinar from attendees was, "We can not port
> without packages"
>
> So this is a reality. If we want tried and true Oracle developers to port
> to PostgreSQL, we must provide some level of package capability.
>
> There are some that would say we don't need them. You are right, we don't
> need them. We should however want them if we want to continue to stomp
> through the business sector and continue growth.
>

We have schemas instead - the PostgreSQL schema is close to Oracle
packages.

What we cannot substitute are package variables now - see my proposal
for session variables.

Now I am working on migrating a large Oracle project - I see more
significant issues:

1. no good tools - ora2pg does a lot of work, but the PL/SQL -> PL/pgSQL
migration support is basic
2. some things in Postgres are different - boolean type, enum types, date
type, OUT parameters ..
3. some things are really different - NULL versus empty string
4. there are no good postprocessing tools - a PL/pgSQL beautifier
(formatter), an SQL formatter
5. developers are still using Oracle outer joins - there is no 100%
automatic migration
6. missing common patterns for deployment and tests for a really big set
of code.

Now I work on migrating about 500K lines - and it is terrible work. It is
a 20-year-old project - lots of the code is not clean, it is hard to
migrate, it is hard to clean. Sure, there is not one line of tests.

If we miss something, it is a modern robust tool for migration - big
thanks to the ora2pg maintainers and developers - without it, there is
nothing free.

Regards

Pavel


>
> I use this post to inspire conversation on how we can get this done.
>
> Sincerely,
>
> JD
>
> --
> Command Prompt, Inc.  http://the.postgres.company/
> +1-503-667-4564
> PostgreSQL Centered full stack support, consulting and development.
> Everyone appreciates your honesty, until you are honest with them.
> Unless otherwise stated, opinions are my own.
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


Re: [HACKERS] Packages: Again

2017-01-11 Thread Bruce Momjian
On Wed, Jan 11, 2017 at 10:57:58AM -0800, Joshua Drake wrote:
> -hackers,
> 
> I know we have talked about this before but today it was impressed upon me
> rather firmly. I presented a Webinar: Postgres for Oracle People. The
> attendees were 90% pl/pgsql developers. 330 people registered for an event
> that was only allowed to host 100 people. The webinar went on for 2 hours.
> (it was only scheduled for one hour, that is how interactive it was)
> 
> By far the tagline of this webinar from attendees was, "We can not port
> without packages"
> 
> So this is a reality. If we want tried and true Oracle developers to port to
> PostgreSQL, we must provide some level of package capability.
> 
> There are some that would say we don't need them. You are right, we don't
> need them. We should however want them if we want to continue to stomp
> through the business sector and continue growth.
> 
> I use this post to inspire conversation on how we can get this done.

So, we have a TODO item with links:

Add features of Oracle-style packages

A package would be a schema with session-local variables,
public/private functions, and initialization functions. It is also
possible to implement these capabilities in any schema and not use a
separate "packages" syntax at all.

proposal for PL packages for 8.3.
proposal: schema PL session variables
proposal: session server side variables 
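
For reference, the closest approximation available today is a custom GUC
used as a session-local variable - a rough sketch, with the myapp prefix
purely illustrative:

SELECT set_config('myapp.user_id', '42', false);  -- false = lives for the session
SELECT current_setting('myapp.user_id');          -- '42', text-typed, session-local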

Is there anything that needs updating there, or is it just a question of
getting someone to implement it?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: [[Parallel] Shared] Hash

2017-01-11 Thread Peter Geoghegan
On Wed, Jan 11, 2017 at 10:57 AM, Robert Haas  wrote:
> On Tue, Jan 10, 2017 at 8:56 PM, Peter Geoghegan  wrote:
>> Instead of all this, I suggest copying some of my changes to fd.c, so
>> that resource ownership within fd.c differentiates between a vfd that
>> is owned by the backend in the conventional sense, including having a
>> need to delete at eoxact, as well as a lesser form of ownership where
>> deletion should not happen.
>
> If multiple processes are using the same file via the BufFile
> interface, I think that it is absolutely necessary that there should
> be a provision to track the "attach count" of the BufFile.  Each
> process that reaches EOXact decrements the attach count and when it
> reaches 0, the process that reduced it to 0 removes the BufFile.  I
> think anything that's based on the notion that leaders will remove
> files and workers won't is going to be fragile and limiting, and I am
> going to push hard against any such proposal.

Okay. My BufFile unification approach happens to assume that backends
clean up after themselves, but that isn't a rigid assumption (of
course, these are always temp files, so we reason about them as temp
files). It could be based on a refcount fairly easily, such that, as
you say here, deletion of files occurs within workers (that "own" the
files) only as a consequence of their being the last backend with a
reference, that must therefore "turn out the lights" (delete the
file). That seems consistent with what I've done within fd.c, and what
I suggested to Thomas (that he more or less follow that approach).
You'd probably still want to throw an error when workers ended up not
deleting BufFile segments they owned, though, at least for parallel
tuplesort.

This idea is something that's much more limited than the
SharedTemporaryFile() API that you sketched on the parallel sort
thread, because it only concerns resource management, and not how to
make access to the shared file concurrency safe in any special,
standard way. I think that this resource management is something that
should be managed by buffile.c (and the temp file routines within fd.c
that are morally owned by buffile.c, their only caller). It shouldn't
be necessary for a client of this new infrastructure, such as parallel
tuplesort or parallel hash join, to know anything about file paths.
Instead, they should be passing around some kind of minimal
private-to-buffile state in shared memory that coordinates backends
participating in BufFile unification. Private state created by
buffile.c, and passed back to buffile.c. Everything should be
encapsulated within buffile.c, IMV, making parallel implementations as
close as possible to their serial implementations.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] An isolation test for SERIALIZABLE READ ONLY DEFERRABLE

2017-01-11 Thread Robert Haas
On Tue, Jan 10, 2017 at 11:41 PM, Michael Paquier
 wrote:
> On Thu, Jan 5, 2017 at 6:41 AM, Thomas Munro
>  wrote:
>> It's a bit of a strange case: we're indirectly waiting for other
>> backends' transactions to end, but it's not exactly a lock or even a
>> predicate lock: it's waiting for a time when we could run safely with
>> predicate locking disabled.  So it's not at all clear that
>> pg_blocking_pids should try to get its hands on the appropriate pids
>> (or if it even could).  Hence my attempt to do this using our
>> wonderful wait introspection.
>>
>> I don't think putting the particular wait_event name into the test
>> spec would be very useful.  It's really there as a whitelist to be
>> conservative about excluding random waits caused by concurrent
>> autovacuum etc.  It just happens to have only one thing in the
>> whitelist for now, and I'm not sure what else would ever go in it...
>
> Do you think that expanding the wait query by default could be
> intrusive for the other tests? I am wondering about such a white list
> to generate false positives for the existing tests, including
> out-of-core extensions, as all the tests now rely only on
> pg_blocking_pids().

It won't affect anything unless running at transaction isolation level
serializable with the "read only deferrable" option.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: [[Parallel] Shared] Hash

2017-01-11 Thread Robert Haas
On Tue, Jan 10, 2017 at 8:56 PM, Peter Geoghegan  wrote:
> Instead of all this, I suggest copying some of my changes to fd.c, so
> that resource ownership within fd.c differentiates between a vfd that
> is owned by the backend in the conventional sense, including having a
> need to delete at eoxact, as well as a lesser form of ownership where
> deletion should not happen.

If multiple processes are using the same file via the BufFile
interface, I think that it is absolutely necessary that there should
be a provision to track the "attach count" of the BufFile.  Each
process that reaches EOXact decrements the attach count and when it
reaches 0, the process that reduced it to 0 removes the BufFile.  I
think anything that's based on the notion that leaders will remove
files and workers won't is going to be fragile and limiting, and I am
going to push hard against any such proposal.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Packages: Again

2017-01-11 Thread Joshua D. Drake

-hackers,

I know we have talked about this before but today it was impressed upon 
me rather firmly. I presented a Webinar: Postgres for Oracle People. The 
attendees were 90% pl/pgsql developers. 330 people registered for an 
event that was only allowed to host 100 people. The webinar went on for 
2 hours. (it was only scheduled for one hour, that is how interactive it 
was)


By far the tagline of this webinar from attendees was, "We can not port 
without packages"


So this is a reality. If we want tried and true Oracle developers to 
port to PostgreSQL, we must provide some level of package capability.


There are some that would say we don't need them. You are right, we 
don't need them. We should however want them if we want to continue to 
stomp through the business sector and continue growth.


I use this post to inspire conversation on how we can get this done.

Sincerely,

JD

--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] plan_rows confusion with parallel queries

2017-01-11 Thread Robert Haas
On Wed, Nov 2, 2016 at 10:54 PM, Tom Lane  wrote:
> I got confused by that a minute ago, so no you're not alone.  The problem
> is even worse in join cases.  For example:
>
>  Gather  (cost=34332.00..53265.35 rows=100 width=8)
>    Workers Planned: 2
>    ->  Hash Join  (cost=2.00..52255.35 rows=100 width=8)
>          Hash Cond: ((pp.f1 = cc.f1) AND (pp.f2 = cc.f2))
>          ->  Append  (cost=0.00..8614.96 rows=417996 width=8)
>                ->  Parallel Seq Scan on pp  (cost=0.00..8591.67 rows=416667 width=8)
>                ->  Parallel Seq Scan on pp1  (cost=0.00..23.29 rows=1329 width=8)
>          ->  Hash  (cost=14425.00..14425.00 rows=100 width=8)
>                ->  Seq Scan on cc  (cost=0.00..14425.00 rows=100 width=8)
>> Although - it is estimating 1M rows, but only "per worker" estimates are
>> shown, and because there are 2 workers planned it says 1M/2.4 which is
>> the 416k. I agree it's a bit unclear, but at least it's consistent with
>> how we treat loops (i.e. that the numbers are per loop).
>
> Well, it's not *that* consistent.  If we were estimating all the numbers
> underneath the Gather as being per-worker numbers, that would make some
> amount of sense.  But neither the other seqscan, nor the hash on it, nor
> the hashjoin's output count are scaled that way.  It's very hard to call
> the above display anything but flat-out broken.

While investigating why Rushabh Lathia's Gather Merge patch sometimes
fails to pick a Gather Merge plan even when it really ought to do so,
I ran smack into this problem.  I discovered that this is more than a
cosmetic issue.  The costing itself is actually badly broken.  In the
single-table case, when you have just ...

Gather
-> Parallel Seq Scan

...the Parallel Seq Scan node reflects a per-worker row estimate, and
the Gather node reflects a total row estimate.  But in the join case,
as shown above, the Gather thinks that the total number of rows which
it will produce is equal to the number that will be produced by one
single worker, which is crap, and the cost of doing the join in
parallel is based on the per-worker rather than the total number,
which is crappier.  The difference in cost between the Gather and the
underlying join in the above example is exactly 1010, namely 1000 for
parallel_setup_cost and 100 tuples at 0.1 per tuple, even though 100
is the number of tuples per-worker, not the total number.  That's
really not good.  I probably should have realized this when I looked
at this thread the first time, but I somehow got it into my head that
this was just a complaint about the imperfections of the display
(which is indeed imperfect) and failed to realize that the same report
was also pointing to an actual costing bug.  I apologize for that.

The reason why this is happening is that final_cost_nestloop(),
final_cost_hashjoin(), and final_cost_mergejoin() don't care a whit
about whether the path they are generating is partial.  They apply the
row estimate for the joinrel itself to every such path generated for
the join, except for parameterized paths which are a special case.  I
think this generally has the effect of discouraging parallel joins,
because the inflated row count also inflates the join cost.  I think
the right thing to do is probably to scale the row count estimate for
the joinrel's partial paths by the leader_contribution value computed
in cost_seqscan.
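
For concreteness, a back-of-envelope check of the per-worker arithmetic
quoted earlier in this thread (this mirrors the 9.6 leader_contribution
logic, it is not the planner code itself):

SELECT 2 + (1.0 - 0.3 * 2) AS parallel_divisor,               -- 2.4 with 2 workers
       round(1000000 / (2 + (1.0 - 0.3 * 2))) AS per_worker;  -- ~416667 rows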

Despite my general hatred of back-patching things that cause plan
changes, I'm inclined to think the fix for this should be back-patched
to 9.6, because this is really a brown-paper-bag bug.  If the
consensus is otherwise I will of course defer to that consensus.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal for changes to recovery.conf API

2017-01-11 Thread Simon Riggs
Having already agreed to remove the two mentioned aspects, I'm just
replying to fill in some historical details.

On 11 January 2017 at 17:25, Peter Eisentraut
 wrote:
> On 1/11/17 5:27 AM, Simon Riggs wrote:
>> * Renaming primary_* parameters - Currently we use this config setting
>> even when connecting to a standby, so the parameter is confusingly
>> named, so 10.0 is a good chance to name it correctly. Will submit as
>> separate patch.
>
> I don't subscribe to the idea that 10.0 is a better chance to change
> something than any other time.
>
> I agree that the naming has become inaccurate, but it still matches the
> basic use case (one primary, one standby (heck, standby is also
> inaccurate now!)), and I don't recall anyone being confused by this.
> Also, it is debatable whether "sender" is better.  Yes, it's a sender,
> but sending what and to whom?

Sending server is the term already used to describe a server that is
either a master or a relaying standby.

But as already requested, I will remove from patch.

>> * Directory for signal files was in my understanding a primary goal of
>> the patch. I am happy to remove that into a later submission. That
>> resolves, for now, the issue with pg_basebackup -R.
>
> I think the issue was that some people didn't want configuration files
> in the data directory.  By removing recovery.conf we accomplish that.
> Signal/trigger files are not configuration (or at least it's much easier
> to argue that), so I think having them in the data directory is fine.

There were a considerable number of people that pushed to make the
data directory non-user writable, which is where the signal directory
came from.

I can see the argument, but since those that spoke previously have
evaporated, I'm easy.

> I'm concerned that having signal files somewhere else opens up a bunch
> more edge cases that need to be considered.  For example, what if
> someone puts a signal file into a temporary directory that is cleared
> after a server crash and restart.  That can mess up a bunch of things.

We already have trigger_file as a way to specify an alternate
directory, so if I remove signal_file_directory I will need to replace
trigger_file as a parameter.

> (I think I like trigger better than signal, btw.  A signal is something
> asynchronous.)

The code already calls them signal files; it's just called a trigger externally.

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Logical Replication WIP

2017-01-11 Thread Peter Eisentraut
On 1/11/17 3:29 AM, Petr Jelinek wrote:
> Okay, looking into my notes, I originally did this because we did not
> allow adding tables without pkeys to publications which effectively
> prohibited FOR ALL TABLES publication from working because of
> information_schema without this. Since this is no longer the case I
> think it's safe to skip the FirstNormalObjectId check.

Wouldn't that mean that FOR ALL TABLES replicates the tables from
information_schema?

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Logical Replication WIP

2017-01-11 Thread Peter Eisentraut
On 1/11/17 3:11 AM, Petr Jelinek wrote:
> That will not help, issue is that we consider names for origins to be
> unique across cluster while subscription names are per database so if
> there is origin per subscription (which there has to be) it will always
> clash if we just use the name. I already have locally changed this to
> pg_ naming scheme and it works fine.

How will that make it unique across the cluster?

Should we include the system ID from pg_control?
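
(For reference, that identifier is already exposed at the SQL level in 9.6
and later, should we decide to use it:)

SELECT system_identifier FROM pg_control_system();
-- the same value in every database of the cluster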

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal for changes to recovery.conf API

2017-01-11 Thread Peter Eisentraut
On 1/11/17 5:27 AM, Simon Riggs wrote:
> * Renaming primary_* parameters - Currently we use this config setting
> even when connecting to a standby, so the parameter is confusingly
> named, so 10.0 is a good chance to name it correctly. Will submit as
> separate patch.

I don't subscribe to the idea that 10.0 is a better chance to change
something than any other time.

I agree that the naming has become inaccurate, but it still matches the
basic use case (one primary, one standby (heck, standby is also
inaccurate now!)), and I don't recall anyone being confused by this.
Also, it is debatable whether "sender" is better.  Yes, it's a sender,
but sending what and to whom?

> * Directory for signal files was in my understanding a primary goal of
> the patch. I am happy to remove that into a later submission. That
> resolves, for now, the issue with pg_basebackup -R.

I think the issue was that some people didn't want configuration files
in the data directory.  By removing recovery.conf we accomplish that.
Signal/trigger files are not configuration (or at least it's much easier
to argue that), so I think having them in the data directory is fine.

I'm concerned that having signal files somewhere else opens up a bunch
more edge cases that need to be considered.  For example, what if
someone puts a signal file into a temporary directory that is cleared
after a server crash and restart.  That can mess up a bunch of things.

(I think I like trigger better than signal, btw.  A signal is something
asynchronous.)

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] merging some features from plpgsql2 project

2017-01-11 Thread Peter Eisentraut
On 1/10/17 8:44 AM, Marko Tiikkaja wrote:
> On Tue, Jan 10, 2017 at 2:26 PM, Peter Eisentraut
>  > wrote:
> 
> It's not like PL/pgSQL is the king of brevity.  
> 
> 
> This is essentially saying "PL/PgSQL isn't perfect, so we shouldn't try
> and make it better".  I hear this argument a lot, and as long as people
> keep rejecting improvements for this reason they can keep saying it. 
> It's a self-fulfilling prophecy.

I'm not making that argument.  But if the plan here is that PL/pgSQL is
too verbose, let's make it less verbose, then maybe, but let's see a
more complete plan for that.

The current syntax was chosen because it is SQL-compatible.  Adding
redundant syntax to save a few characters without any new functionality
(performance, resource usage, safety, etc.) is a weak argument in the
overall scheme of things.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Questionable tag usage

2017-01-11 Thread Peter Eisentraut
On 1/6/17 8:56 AM, Magnus Hagander wrote:
> You could argue that nobody reads the PG docs on dead trees anymore
> and we should embrace the hyperlink style with enthusiasm.  I wouldn't
> be against that personally, but there are a lot of places to change if
> we decide that parenthetical "(see Section M.N)" hotlinks are passé.

> I don't think there are a lot of people who use dead tree editions
> anymore, but they certainly do exist. A lot of people use the PDFs
> though, particularly for offline reading or loading them in ebook
> readers. So it still has to be workable there.

And there are man pages as the canonical example of why environments
without full hyperlinking are still useful.

Also, I'm not fond of the style of writing where random words are
hyperlinks, because then there is no context of what the link is for.
Is it more information, is it the canonical definition, is it a glossary
entry, a link to Wikipedia, is it essential that I read the linked-to
material to be able to understand the current material, etc.  And for
mostly technical reasons, the links sometimes point into the middle of a
section, so it's hard to find what the link was supposed to help you
with, even more so if the target section was rewritten since the link
was placed.  The xref style of linking makes the relationship between
both ends of the link more explicit and keeps up with changes in either
side of the link better.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Rename pg_switch_xlog to pg_switch_wal

2017-01-11 Thread Vladimir Rusinov
On Tue, Jan 10, 2017 at 5:24 AM, Michael Paquier 
wrote:

> -errhint("pg_xlogfile_name_offset() cannot be executed
> during recovery.")));
> +errhint(
> +"pg_wal_file_name_offset() cannot be executed
> during recovery.")));
> I am not sure that there is any need to reformat this error hint.
>
> Your patch includes useless diff noise in pg_proc.h. By that I mean
> the lines of pg_start_backup, pg_stop_backup, etc.
>
> +/*  Next OID: 6016 */
> +
> I don't think you need that.
>

Reverted all of the above.


>
> src/test/perl/PostgresNode.pm has added recently a new method called
> lsn() that uses some of the WAL position functions. Their update is
> necessary as well for this patch.
>

Good catch. Changed those as well.


> As there are two school of thoughts on this thread, keeping your patch
> with the compatibility table is the best move for now. Even if we end
> up by having a version without aliases, that will be just code to
> remove in the final version.
>

Indeed, it is trivial to kill aliases.

New version of the patch attached.

-- 
Vladimir Rusinov
Storage SRE, Google Ireland

Google Ireland Ltd.,Gordon House, Barrow Street, Dublin 4, Ireland
Registered in Dublin, Ireland
Registration Number: 368047
diff --git a/contrib/bloom/t/001_wal.pl b/contrib/bloom/t/001_wal.pl
index 56c6618d3d..dc907baf87 100644
--- a/contrib/bloom/t/001_wal.pl
+++ b/contrib/bloom/t/001_wal.pl
@@ -16,7 +16,7 @@ sub test_index_replay
 	# Wait for standby to catch up
 	my $applname = $node_standby->name;
 	my $caughtup_query =
-"SELECT pg_current_xlog_location() <= write_location FROM pg_stat_replication WHERE application_name = '$applname';";
+"SELECT pg_current_wal_location() <= write_location FROM pg_stat_replication WHERE application_name = '$applname';";
 	$node_master->poll_query_until('postgres', $caughtup_query)
 	  or die "Timed out while waiting for standby 1 to catch up";
 
diff --git a/contrib/test_decoding/expected/ddl.out b/contrib/test_decoding/expected/ddl.out
index c104c4802d..275c84a450 100644
--- a/contrib/test_decoding/expected/ddl.out
+++ b/contrib/test_decoding/expected/ddl.out
@@ -58,7 +58,7 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_d
 SELECT slot_name, plugin, slot_type, active,
 NOT catalog_xmin IS NULL AS catalog_xmin_set,
 xmin IS NULl  AS data_xmin_not_set,
-pg_xlog_location_diff(restart_lsn, '0/0100') > 0 AS some_wal
+pg_wal_location_diff(restart_lsn, '0/0100') > 0 AS some_wal
 FROM pg_replication_slots;
 slot_name|plugin | slot_type | active | catalog_xmin_set | data_xmin_not_set | some_wal 
 -+---+---++--+---+--
diff --git a/contrib/test_decoding/sql/ddl.sql b/contrib/test_decoding/sql/ddl.sql
index 89b8b8f780..49dad39b50 100644
--- a/contrib/test_decoding/sql/ddl.sql
+++ b/contrib/test_decoding/sql/ddl.sql
@@ -29,7 +29,7 @@ SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_d
 SELECT slot_name, plugin, slot_type, active,
 NOT catalog_xmin IS NULL AS catalog_xmin_set,
 xmin IS NULl  AS data_xmin_not_set,
-pg_xlog_location_diff(restart_lsn, '0/0100') > 0 AS some_wal
+pg_wal_location_diff(restart_lsn, '0/0100') > 0 AS some_wal
 FROM pg_replication_slots;
 
 /*
diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml
index 6eaed1efbe..e37f101277 100644
--- a/doc/src/sgml/backup.sgml
+++ b/doc/src/sgml/backup.sgml
@@ -726,9 +726,9 @@ test ! -f /mnt/server/archivedir/000100A90065  cp pg_wal/0
 

 Also, you can force a segment switch manually with
-pg_switch_xlog if you want to ensure that a
-just-finished transaction is archived as soon as possible.  Other utility
-functions related to WAL management are listed in .
+pg_switch_wal if you want to ensure that a just-finished
+transaction is archived as soon as possible.  Other utility functions
+related to WAL management are listed in .

 
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 10e31868ba..a5bfb4b306 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -17925,13 +17925,13 @@ SELECT set_config('log_statement_stats', 'off', false);
 pg_create_restore_point


-pg_current_xlog_flush_location
+pg_current_wal_flush_location


-pg_current_xlog_insert_location
+pg_current_wal_insert_location


-pg_current_xlog_location
+pg_current_wal_location


 pg_start_backup
@@ -17946,24 +17946,25 @@ SELECT set_config('log_statement_stats', 'off', false);
 pg_backup_start_time


-pg_switch_xlog
+pg_switch_wal


-pg_xlogfile_name
+pg_wal_file_name


-pg_xlogfile_name_offset
+pg_wal_file_name_offset


-pg_xlog_location_diff
+pg_wal_location_diff

 

 The 

Re: [HACKERS] PATCH: recursive json_populate_record()

2017-01-11 Thread Nikita Glukhov

On 01/08/2017 09:52 PM, Tom Lane wrote:


The example you quoted at the start of the thread doesn't fail for me
in HEAD, so I surmise that it's falling foul of some assertion you added
in the 0001 patch, but if so I think that assertion is wrong.  attndims
is really syntactic sugar only and doesn't affect anything meaningful
semantically.  Here is an example showing why it shouldn't:

regression=# create table foo (d0 _int4, d1 int[], d2 int[3][4]);
CREATE TABLE
regression=# select attname,atttypid,attndims from pg_attribute where attrelid = 
'foo'::regclass and attnum > 0;
  attname | atttypid | attndims
-+--+--
  d0  | 1007 |0
  d1  | 1007 |1
  d2  | 1007 |2
(3 rows)

Columns d0,d1,d2 are really all of the same type, and any code that
treats d0 and d1 differently is certainly broken.



Thank you for this example with the raw _int4 type.  I didn't expect that
attndims could legally be zero.  There really was a wrong assertion "ndims > 0";
it was there because I used attndims to verify the number of
dimensions of a populated json array.

I have fixed the first patch: when the number of dimensions is unknown,
we determine it simply from the number of successive opening brackets at
the start of a json array.  But I am still using for verification the non-zero
ndims values that can be obtained from a composite type attribute (attndims) or
from a domain array type (typndims) through the specially added function
get_type_ndims().


On 01/08/2017 09:52 PM, Tom Lane wrote:


I do not see the point of the second one of these, and it adds no test
case showing why it would be needed.

I have also added special test cases for json_to_record() showing the difference
in behavior that the second patch brings (see changes in json.out and
jsonb.out).


--
Nikita Glukhov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 10e3186..55cacfb 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -11249,12 +11249,12 @@ table2-mapping
  whose columns match the record type defined by base
  (see note below).

-   select * from json_populate_record(null::myrowtype, '{"a":1,"b":2}')
+   select * from json_populate_record(null::myrowtype, '{"a": 1, "b": ["2", "a b"], "c": {"d": 4, "e": "a  b c"}}')

 
- a | b
----+---
- 1 | 2
+ a |     b     |      c
+---+-----------+-------------
+ 1 | {2,"a b"} | (4,"a b c")
 

   
@@ -11343,12 +11343,12 @@ table2-mapping
  explicitly define the structure of the record with an AS
  clause.

-   select * from json_to_record('{"a":1,"b":[1,2,3],"c":"bar"}') as x(a int, b text, d text) 
+   select * from json_to_record('{"a":1,"b":[1,2,3],"c":[1,2,3],"e":"bar","r": {"a": 123, "b": "a b c"}}') as x(a int, b text, c int[], d text, r myrowtype) 

 
- a |    b    | d
----+---------+---
- 1 | [1,2,3] |
+ a |    b    |    c    | d |       r
+---+---------+---------+---+---------------
+ 1 | [1,2,3] | {1,2,3} |   | (123,"a b c")
 

   
diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c
index 17ee4e4..04959cb 100644
--- a/src/backend/utils/adt/jsonfuncs.c
+++ b/src/backend/utils/adt/jsonfuncs.c
@@ -93,7 +93,7 @@ static void elements_array_element_end(void *state, bool isnull);
 static void elements_scalar(void *state, char *token, JsonTokenType tokentype);
 
 /* turn a json object into a hash table */
-static HTAB *get_json_object_as_hash(text *json, const char *funcname);
+static HTAB *get_json_object_as_hash(char *json, int len, const char *funcname);
 
 /* common worker for populate_record and to_record */
 static Datum populate_record_worker(FunctionCallInfo fcinfo, const char *funcname,
@@ -149,6 +149,33 @@ static void setPathArray(JsonbIterator **it, Datum *path_elems,
 			 int level, Jsonb *newval, uint32 nelems, int op_type);
 static void addJsonbToParseState(JsonbParseState **jbps, Jsonb *jb);
 
+/* helper functions for populate_record[set] */
+typedef struct ColumnIOData ColumnIOData;
+typedef struct RecordIOData RecordIOData;
+
+static HeapTupleHeader
+populate_record(TupleDesc tupdesc,
+                RecordIOData **record_info,
+                HeapTupleHeader template,
+                MemoryContext mcxt,
+                Oid jtype,
+                HTAB *json_hash,
+                JsonbContainer *cont);
+
+static Datum
+populate_record_field(ColumnIOData *col,
+	  Oid			typid,
+	  int32			typmod,
+	  int32			ndims,
+	  const char   *colname,
+	  MemoryContext	mcxt,
+	  Datum			defaultval,
+	  Oid			jtype,
+	  char		   *json,
+	  bool			json_is_string,
+	  JsonbValue   *jval,
+	  bool		   *isnull);
+
 /* state for json_object_keys */
 typedef struct OkeysState
 {
@@ -216,6 +243,7 @@ typedef struct JhashState
 	HTAB	   *hash;
 	char	   *saved_scalar;
 	char	   *save_json_start;
+	bool		saved_scalar_is_string;
 } 

Re: [HACKERS] Questionable tag usage

2017-01-11 Thread Peter Eisentraut
On 1/4/17 11:50 PM, Tom Lane wrote:
> Anyway, bottom line is I'm not terribly excited about fixing just this
> one place.  I think we need to decide whether we like the new more-verbose
> output for links.  If we don't, we need to fix the markup rules to not do
> that.  If we do, there are a lot of places that need adjustment to be less
> duplicative, and we should try to be somewhat systematic about fixing
> them.

We can turn off the new link appearance and change it back to the old
one.  We had previously speculated that this change might be awkward in
some cases, but without any concrete cases.  If we have concrete cases,
we can change it.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_restore accepts -j -1

2017-01-11 Thread Stephen Frost
Ashutosh,

* Stephen Frost (sfr...@snowman.net) wrote:
> Attached patch adds the same check to pg_restore that's in pg_dump
> already.  Looks like it should back-patch to 9.3 pretty cleanly and I'll
> add a similar check for 9.2.

After playing with this, it seems entirely wrong to wait until after we
try to open the archive before we finish checking the validity of the
options, so I moved these checks up to a more sensible location.

> Any thoughts about adding the Windows-specific MAXIMUM_WAIT_OBJECTS
> check to 9.2 pg_restore..?  It certainly looks entirely straight-forward
> to do, and though I don't recall hearing anyone complaining about trying
> to run pg_restore on Windows with lots of jobs and having it fall over,
> it might avoid a bit of frustration for anyone who does try.

The more I think about it, the more it seems like this is something we
should have back-patched when we added that check, so I'll go ahead and
add that check to 9.2 also.

Updated patch attached.

I'll plan to push these changes later on this afternoon.

Thanks!

Stephen
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
new file mode 100644
index b21fd26..239b0d8
*** a/src/bin/pg_dump/pg_restore.c
--- b/src/bin/pg_dump/pg_restore.c
*** main(int argc, char **argv)
*** 330,335 
--- 330,351 
  		exit_nicely(1);
  	}
  
+ 	if (numWorkers <= 0)
+ 	{
+ 		fprintf(stderr, _("%s: invalid number of parallel jobs\n"), progname);
+ 		exit(1);
+ 	}
+ 
+ 	/* See comments in pg_dump.c */
+ #ifdef WIN32
+ 	if (numWorkers > MAXIMUM_WAIT_OBJECTS)
+ 	{
+ 		fprintf(stderr, _("%s: maximum number of parallel jobs is %d\n"),
+ progname, MAXIMUM_WAIT_OBJECTS);
+ 		exit(1);
+ 	}
+ #endif
+ 
  	/* Can't do single-txn mode with multiple connections */
  	if (opts->single_txn && numWorkers > 1)
  	{
*** main(int argc, char **argv)
*** 402,417 
  	if (opts->tocFile)
  		SortTocFromFile(AH);
  
- 	/* See comments in pg_dump.c */
- #ifdef WIN32
- 	if (numWorkers > MAXIMUM_WAIT_OBJECTS)
- 	{
- 		fprintf(stderr, _("%s: maximum number of parallel jobs is %d\n"),
- progname, MAXIMUM_WAIT_OBJECTS);
- 		exit(1);
- 	}
- #endif
- 
  	AH->numWorkers = numWorkers;
  
  	if (opts->tocSummary)
--- 418,423 


signature.asc
Description: Digital signature


Re: [HACKERS] merging some features from plpgsql2 project

2017-01-11 Thread Pavel Stehule
2017-01-11 15:37 GMT+01:00 Merlin Moncure :

> On Tue, Jan 10, 2017 at 7:44 AM, Marko Tiikkaja  wrote:
> > On Tue, Jan 10, 2017 at 2:26 PM, Peter Eisentraut
> >  wrote:
> >>
> >> It's not like PL/pgSQL is the king of brevity.
> >
> >
> > This is essentially saying "PL/PgSQL isn't perfect, so we shouldn't try
> and
> > make it better".  I hear this argument a lot, and as long as people keep
> > rejecting improvements for this reason they can keep saying it.  It's a
> > self-fulfilling prophecy.
>
> Agreed.  But adding language features, especially syntactical ones,
> demands prudence; there is good reason to limit keywords like that.
> What about:
>
> pgsql.rows
> pgsql.found
> pgsql.sqlerrm
> etc
> as automatic variables (I think this was suggested upthread).
> Conflicts with existing structures is of course an issue but I bet it
> could be worked out.
>

Any implicit namespace can be a problem. But we could keep the default
unlabeled namespace for auto variables, with the possibility to specify this
namespace explicitly.
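
For readers following along, a minimal sketch of the current spellings that
pgsql.rows and pgsql.found would replace (the table t is illustrative):

CREATE TEMP TABLE t (x int);
INSERT INTO t VALUES (1), (2);

DO $$
DECLARE
    n bigint;
BEGIN
    UPDATE t SET x = x + 1;
    GET DIAGNOSTICS n = ROW_COUNT;  -- what a pgsql.rows variable would expose
    IF FOUND THEN                   -- what pgsql.found would expose
        RAISE NOTICE 'updated % rows', n;
    END IF;
END
$$;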

Regards

Pavel


Re: [HACKERS] WARM and indirect indexes

2017-01-11 Thread Bruce Momjian
On Wed, Jan 11, 2017 at 12:24:55PM -0300, Alvaro Herrera wrote:
> Bruce Momjian wrote:
> 
> > Therefore, I think we need WARM done first, then we can test indirect
> > indexes to see if they are a sufficient win to add it for the small
> > percentage of users who will use it.
> 
> Agreed -- that's my plan.

Thanks.  I hate to pour cold water on a feature, honestly, but I don't
want us to over-react to the write amplification problem either.  We
don't want to have X features to improve it when a single feature is
sufficient.  (Amit might be right that the real win for indirect indexes
will be some type of clustered index, where the win might be larger.)

I know Uber dumped us "allegedly" over this issue (among other
complaints), but I am concerned we are overreacting if we change
Postgres too much to address this concern.  Hence, I am arguing we don't
add both features at the same time without evaluating the need for the
second feature after the first feature is done.

Let me give an example of us not over-reacting.  When we implemented HOT
(thanks Pavan), we considered the problem that our default fill factor
for heap is 100%, so there is no room for HOT updates on a full page.
Should we reduce the default fill factor when adding HOT?  We decided
not to, on the hope that the first update to a row on a full page would
put the new row on a page with sufficient free space for future HOT
updates, and that has proven to be the case.

We do document lower fill factors for heap to improve HOT updates, but I
think everyone feels that is mostly for good performance after the
initial table load, and that over time the frequently-updated rows will
naturally migrate to pages with sufficient free space.
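
(A sketch of the documented knob, with a hypothetical table name:)

CREATE TABLE hot_demo (id int PRIMARY KEY, payload text)
    WITH (fillfactor = 90);          -- leave 10% headroom for HOT updates
ALTER TABLE hot_demo SET (fillfactor = 85);  -- affects only newly written pages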

My point is that we didn't over-react in that case, and the result was
fine --- read-only rows got dense storage, and frequently-updated rows
got sufficient free space for HOT, but we had to push HOT into
production to confirm we were in good shape.  I am thinking we need to
complete WARM to figure out what it doesn't do well in production so we
can fairly evaluate indirect indexes.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: Clarifying "server starting" messaging in pg_ctl start without --wait

2017-01-11 Thread Ryan Murphy
Thanks for the review Beena, I'm glad the patch is ready to go!

I think because of my environment/setup, I get errors when I try "make
install-world", but I'm at work now; when I have time I will go back,
try again, and figure out what is wrong.  I'll let you guys know if I have
any questions.

Take care,
Ryan


[HACKERS] many copies of atooid() and oid_cmp()

2017-01-11 Thread Peter Eisentraut
There are approximately 11 copies of atooid() and 3 of oid_cmp() or
equivalent, and pending patches are proposing to add more.  I propose
these two patches to collect them in central places.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>From 6f30dc0b653b5b19f9d3ae29788fc922848b52e5 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Wed, 11 Jan 2017 12:00:00 -0500
Subject: [PATCH 1/2] Move atooid() definition to a central place

---
 contrib/lo/lo.c   | 2 --
 contrib/vacuumlo/vacuumlo.c   | 2 --
 src/backend/bootstrap/bootparse.y | 2 --
 src/backend/libpq/hba.c   | 3 ---
 src/backend/nodes/readfuncs.c | 2 --
 src/backend/utils/adt/misc.c  | 2 --
 src/bin/pg_basebackup/pg_basebackup.c | 2 --
 src/bin/pg_upgrade/pg_upgrade.h   | 2 --
 src/bin/psql/common.h | 2 --
 src/bin/scripts/droplang.c| 2 --
 src/include/fe_utils/string_utils.h   | 2 --
 src/include/postgres_ext.h| 4 
 12 files changed, 4 insertions(+), 23 deletions(-)

diff --git a/contrib/lo/lo.c b/contrib/lo/lo.c
index 953659305f..805514d97a 100644
--- a/contrib/lo/lo.c
+++ b/contrib/lo/lo.c
@@ -14,8 +14,6 @@
 
 PG_MODULE_MAGIC;
 
-#define atooid(x)  ((Oid) strtoul((x), NULL, 10))
-
 
 /*
  * This is the trigger that protects us from orphaned large objects
diff --git a/contrib/vacuumlo/vacuumlo.c b/contrib/vacuumlo/vacuumlo.c
index d9dee2f7a0..06f469067b 100644
--- a/contrib/vacuumlo/vacuumlo.c
+++ b/contrib/vacuumlo/vacuumlo.c
@@ -24,8 +24,6 @@
 #include "libpq-fe.h"
 #include "pg_getopt.h"
 
-#define atooid(x)  ((Oid) strtoul((x), NULL, 10))
-
 #define BUFSIZE			1024
 
 enum trivalue
diff --git a/src/backend/bootstrap/bootparse.y b/src/backend/bootstrap/bootparse.y
index d28af23c94..23bdb5c759 100644
--- a/src/backend/bootstrap/bootparse.y
+++ b/src/backend/bootstrap/bootparse.y
@@ -51,8 +51,6 @@
 #include "tcop/dest.h"
 #include "utils/rel.h"
 
-#define atooid(x)	((Oid) strtoul((x), NULL, 10))
-
 
 /*
  * Bison doesn't allocate anything that needs to live across parser calls,
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 07f046fd8b..91c5a49de5 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -47,9 +47,6 @@
 #endif
 
 
-#define atooid(x)  ((Oid) strtoul((x), NULL, 10))
-#define atoxid(x)  ((TransactionId) strtoul((x), NULL, 10))
-
 #define MAX_TOKEN	256
 #define MAX_LINE	8192
 
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index dc40d0181e..52b6cc9ffc 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -164,8 +164,6 @@
  */
 #define atoui(x)  ((unsigned int) strtoul((x), NULL, 10))
 
-#define atooid(x)  ((Oid) strtoul((x), NULL, 10))
-
 #define strtobool(x)  ((*(x) == 't') ? true : false)
 
 #define nullable_string(token,length)  \
diff --git a/src/backend/utils/adt/misc.c b/src/backend/utils/adt/misc.c
index 66d09bcb0c..42221da128 100644
--- a/src/backend/utils/adt/misc.c
+++ b/src/backend/utils/adt/misc.c
@@ -44,8 +44,6 @@
 #include "utils/builtins.h"
 #include "utils/timestamp.h"
 
-#define atooid(x)  ((Oid) strtoul((x), NULL, 10))
-
 
 /*
  * Common subroutine for num_nulls() and num_nonnulls().
diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c
index 8ebf24e771..f8f1133a1e 100644
--- a/src/bin/pg_basebackup/pg_basebackup.c
+++ b/src/bin/pg_basebackup/pg_basebackup.c
@@ -40,8 +40,6 @@
 #include "streamutil.h"
 
 
-#define atooid(x)  ((Oid) strtoul((x), NULL, 10))
-
 typedef struct TablespaceListCell
 {
 	struct TablespaceListCell *next;
diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h
index 42e7aebb01..524709a118 100644
--- a/src/bin/pg_upgrade/pg_upgrade.h
+++ b/src/bin/pg_upgrade/pg_upgrade.h
@@ -97,8 +97,6 @@ extern char *output_files[];
#define CLUSTER_NAME(cluster)	((cluster) == &old_cluster ? "old" : \
 								 (cluster) == &new_cluster ? "new" : "none")
 
-#define atooid(x)  ((Oid) strtoul((x), NULL, 10))
-
 /* OID system catalog preservation added during PG 9.0 development */
 #define TABLE_SPACE_SUBDIRS_CAT_VER 20100
 /* postmaster/postgres -b (binary_upgrade) flag added during PG 9.1 development */
diff --git a/src/bin/psql/common.h b/src/bin/psql/common.h
index dad0eb822a..a83bc69ab7 100644
--- a/src/bin/psql/common.h
+++ b/src/bin/psql/common.h
@@ -13,8 +13,6 @@
 #include "libpq-fe.h"
 #include "fe_utils/print.h"
 
-#define atooid(x)  ((Oid) strtoul((x), NULL, 10))
-
 extern bool openQueryOutputFile(const char *fname, FILE **fout, bool *is_pipe);
 extern bool setQFout(const char *fname);
 
diff --git a/src/bin/scripts/droplang.c b/src/bin/scripts/droplang.c
index ab0215d495..97d1de43ae 100644
--- a/src/bin/scripts/droplang.c
+++ b/src/bin/scripts/droplang.c
@@ -14,8 +14,6 @@
 #include "common.h"
 #include "fe_utils/print.h"
 
-#define atooid(x)  ((Oid) 

Re: [HACKERS] Logical replication existing data copy

2017-01-11 Thread Peter Eisentraut
On 12/19/16 4:30 AM, Petr Jelinek wrote:
> This patch implements data synchronization for the logical replication.
> It works both for initial setup of subscription as well as for tables
> added later to replication when the subscription is already active.

First detailed read-through.  General structure makes sense.

Comments on some details:

No catalogs.sgml documentation for pg_subscription_rel.  When that is
added, I would emphasize that entries are only added when relations
are first encountered, not immediately when a table is added to a
publication.  So it is unlike pg_publication_rel in that respect.

Rename max_subscription_sync_workers ->
max_sync_workers_per_subscription (similar to
max_parallel_workers_per_gather and others)

About the changes to COPY: It took me a while to get clarity on the
direction of things.  Is the read_cb reading the table, or the data?
You are copying data produced by a function into a table, so
produce_cb or data_source_cb could be clearer.

SetSubscriptionRelState(): This is doing an "upsert", so I don't think
a RowExclusiveLock is enough, or it needs to be able to retry on
concurrent changes.  I suppose there shouldn't be more than one
concurrent change per sub/rel pair, but that would need to be
explained there.

SetSubscriptionRelState(): The memset(values, 0, sizeof(values)) is
kind of in an odd place.  Possibly not actually needed.

GetSubscriptionRelState(): Prefer error messages that identify the
objects by name.  (Also subid should be %u.)

GetSubscriptionRelationsFilter(): The state and filter arguments are
kind of weird.  And there are only two callers, so all this complexity
is not even used.  Perhaps, if state == SUBREL_STATE_UNKNOWN, then
return everything, else return exactly the state specified.  The case
of everything-but-the-state-specified does not appear to be needed.

GetSubscriptionRelationsFilter(): RowExclusiveLock is probably too
much

This patch adds the fourth definition of oid_cmp() (also known as
oidComparator()) and the 12th definition of atooid().  Let's collect
those in a central place.  I'm sending a separate patch for that.  (No
need to change here until that is resolved, of course.)

AlterSubscription_refresh(): Put the if (wrconn == NULL) case first
and error out, and then put the rest of the code into the main
function block, saving one level of indentation.

AlterSubscription_refresh(): remote_table_oids isn't really the
remote OIDs of the tables but the local OIDs of the remote tables.
Consider clearer variable naming in that function.

interesting_relation(): very generic name, maybe
should_apply_changes_for_rel().  Also the comment mentions a "parallel
worker" -- maybe better a "separate worker process running in
parallel", since a parallel worker is something different.

LogicalRepApplyLoop() changed to non-static, but not used anywhere
else.

process_syncing_tables_*(): Function names and associated comments are
not very clear (process what?, handle what?).

In process_syncing_tables_apply(), it says that
logicalrep_worker_count() counts the apply worker as well, but what
happens if the apply worker has temporarily disappeared?  Since
logicalrep_worker_count() is only used in this one place, maybe have
it count just the sync workers and rename it slightly.

Changed some LOG messages to DEBUG, not clear what the purpose there is.

check_max_subscription_sync_workers(): Same problem as discussed
previously, checking a GUC setting against another GUC setting like
this doesn't work.  Just skip it and check at run time.

wait_for_sync_status_change(): Comment is wrong/misleading: It doesn't
wait until it matches any specific state, it just waits for any state
change.

LogicalRepSyncTableStart(): The code that assembles the slot name
needs to be aware of NAMEDATALEN.  (Maybe at least throw in a static
assert that NAMEDATALEN>=64.)


-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Floating point comparison inconsistencies of the geometric types

2017-01-11 Thread Emre Hasegeli
> - Floating point comparisons for geometric types
>
>   Comparisons related to geometric types are performed by the FPeq
>   macro and similar ones defined in geo_decls.h. These are intended
>   to give tolerance to the comparisons.
>
>  A
>   FPeq: |<=e-|-e=>|(<= means inclusive, e = epsilon = tolerance)
>   FPne:   ->|  e | e  |<-  (<- means exclusive)
>   FPlt:  | e  |<-
>   FPle: |<=e |
>   FPgt:   ->|  e |
>   FPge:  | e=>|
>
>   These seem reasonable, ignoring the tolerance amount issue.

I tried to explain below why I don't find this method reasonable.

> At least for the point type, (bitmap) index scan is consistent
> with sequential scan.  I remember that the issue was
> "inconsistency between indexed and non-indexed scans over
> geometric types" but they seem consistent with each other.

You can see in the Git history that it was a lot of trouble to make
the indexes consistent with the operators, and this is limiting
improvements to the indexing of the geometric types.

> You mentioned b-tree, which doesn't have a predefined opclass for
> geometric types. So the "inconsistency" may refer to the
> difference between geometric values and combinations of plain
> (first-class) values. But the two are different things,
> apparently using different operators (~= and = for equality), so
> the issue does not fit here. More specifically, "p ~= p0" and
> "x = x0 and y = y0" are completely different.

Yes, the default btree operator class doesn't have to implement
standard equality and basic comparison operator symbols.  We can
implement it with different symbols.

> Could you let me (or the other folks on this list) have a more
> precise description of the problem and/or what you want to do with
> this patch?

I will try to itemize the current problems with the epsilon method:

= Operator Inconsistencies

My current patches address all of them.

- Only some operators are using the epsilon.

I included the list of the ones which don't use the epsilon in the
initial post of this thread.  This makes the operators incompatible
with each other.  For example, you cannot say "if box_a @> box_b and
box_b @> point then box_a @> point".

- The application of epsilon is implementation dependent.

Epsilon works between two numbers.  There is not always a single way
of implementing geometric operators.  This is surprising to the users.
For example, you cannot say "if box_a @> box_b then box_a <-> box_b <=
epsilon".

This also creates a high barrier to developing those operators.
Fixes for bugs and performance regressions are very likely to change
the results.

- Some operators are just wrong:

Line operators are not well maintained.  The following give
wrong results when neither A nor B of a line is exactly 1:

* line # line
* line <-> line
* point ## line
* point ## lseg

They are broken independently from the epsilon, though it is not easy
to fix them without changing the results of the cases that are
currently working.

- Some operators are violating the commutative property.

For example, you cannot say "if line_a ?|| line_b then line_b ?|| line_a".

- Some operators are violating the transitive property.

For example, you cannot say "if point_a ~= point_b and point_b ~=
point_c then point_a ~= point_c".
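
A concrete instance, assuming the stock EPSILON of 1.0E-06 from geo_decls.h:

SELECT '(0,0)'::point ~= '(0,0.0000009)'::point;          -- true,  0.9e-6 <= epsilon
SELECT '(0,0.0000009)'::point ~= '(0,0.0000018)'::point;  -- true,  0.9e-6 <= epsilon
SELECT '(0,0)'::point ~= '(0,0.0000018)'::point;          -- false, 1.8e-6 >  epsilon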

- The operators with epsilon are not compatible with float operators.

This is also surprising for the users.  For example, you cannot say
"if point_a << point_b then point_a[0] < point_b[0]".

- The operators are not checking for underflow or overflow.
- NaN values are not treated same as the float datatype.

Those are independent problems, though checking them with the epsilon
would create an additional set of inconsistencies.  The epsilon creates
especially many problems around 0.

= Missing Bits on the Operator Classes

I want to address those in the future, if my patches are accepted.

- There is no GiST or SP-GiST support for cross datatype operators
other than containment.

We can develop nice cross-datatype operator families to support more
operators.  It would have been easier, if we could rely on some
operators to implement others.  Geometric types serve the purpose of
demonstrating our indexing capabilities.  We should make them good
examples, not full of special cases with hidden bugs.

- There is no BRIN support for types other than the box.

The BRIN inclusion operator class is datatype agnostic.  It uses some
operators to implement others, which doesn't work for anything more than
the box vs box operators, because of the inconsistencies.

- GROUP BY and DISTINCT are not working.
- ORDER BY is not working.

There are regular user complaints about this on the mailing list.  We
could easily provide default hash and btree operator classes that would
fix the problem, if we didn't have the epsilon.
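
The failure mode is easy to reproduce on any recent release:

SELECT DISTINCT '(0,0)'::point;
-- ERROR:  could not identify an equality operator for type point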


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WARM and indirect indexes

2017-01-11 Thread Bruce Momjian
On Wed, Jan 11, 2017 at 08:28:28AM -0500, Andrew Dunstan wrote:
> 
> 
> On 01/10/2017 09:25 PM, Bruce Momjian wrote:
> >I am not saying we shouldn't do it, but I am afraid that the complexity
> >in figuring out when to use indirect indexes, combined with the number
> >of users who will try them, really hurts its inclusion.
> >
> 
> 
> I think you're making this out to be far more complex than it really is. You
> could argue the same about a great many features. Both of these features
> have upsides and downsides.

Right, and we do make such arguments --- what is your point?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WARM and indirect indexes

2017-01-11 Thread Bruce Momjian
On Wed, Jan 11, 2017 at 12:36:24PM +0530, Pavan Deolasee wrote:
> That could also be seen as an advantage to indirect indexes. While I haven't
> seen the code, I believe indirect index code will only be hit if someone
> actually uses them. So there won't be any overhead for other users who do not
> wish to use the feature. WARM on the other hand will be "always on" feature,
> even for system tables. That clearly has advantages, both from usability
> perspective as well as the fact that the code will be heavily tested. But if
> there are cases which get adversely affected by WARM, they will have to pay
> the price for the larger benefit.
> 
> To me, a better strategy is probably to focus on one of the patches, get that
> in and then evaluate the second patch, both from complexity as well as
> performance given that the first patch may have narrowed the gaps.

Yes, that is exactly what I suggested in a post I just sent to this
thread.  With WARM always-on, it seems like the best thing to do first
because everyone will use it silently, and we can then decide if a user
controlled feature is warranted, and under what circumstances we should
recommend the use of the feature.

However, I am concerned that doing the two features serially (not in
parallel) might mean that the second feature doesn't make it into
Postgres 10, but considering we will live with this feature probably
forever, I think it is the best course.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WARM and indirect indexes

2017-01-11 Thread Alvaro Herrera
Bruce Momjian wrote:

> Therefore, I think we need WARM done first, then we can test indirect
> indexes to see if they are a sufficient win to add it for the small
> percentage of users who will use it.

Agreed -- that's my plan.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WARM and indirect indexes

2017-01-11 Thread Bruce Momjian
On Tue, Jan 10, 2017 at 09:25:05PM -0500, Bruce Momjian wrote:
> Thank you for the summary.  I think we have to consider two things with
> indirect indexes:
> 
> 1.  What percentage speedup is the _average_ user going to get?  You
> have to consider people who will use indirect indexes who get no benefit
> or a net slowdown, and users who will get a benefit.
> 
> 2.  What percentage of users are going to use indirect indexes?
> 
> So, for #1 you might have users who are getting +1%, +50%, and -20%, so
> maybe +10% average, and for #2 you might have 0.1%.  When you multiply
> them out, you get 0.01% average improvement per installation, which is
> very small.  Obviously, these are just wild guesses, but this is just to
> make a point.  If you assume WARM has been optimized, #1 gets even
> lower.

Sorry to have to reply to my own email but I need some of the text
above.

Basically, with WARM, the adoption rate (#2) is 100%.

I am asking what instructions we will give users for #1 to prevent
people from using indirect indexes and getting worse performance.  Are
we going to say, "Use indirect indexes on columns that are updated
frequently?"  Actually, that seems pretty clear and would be easy for
users to follow.

I think the big question is that we will not know the benefits of
indirect indexes over WARM until we have implemented WARM, and if the
benefits of indirect indexes over WARM are small, and considering #2, we
might decide that it isn't worth adding it, for the reasons I already
outlined.

Therefore, I think we need WARM done first, then we can test indirect
indexes to see if they are a sufficient win to add it for the small
percentage of users who will use it.

In general, Postgres doesn't support every possible performance-tuning
option, and I think we are better for that because Postgres is simpler
to use.  Going back to my blog post, if you add a feature, every user
who is considering tuning Postgres has to understand the feature and
decide if they should use it, so even for people who don't use the
feature, there is a cost, however small.

In summary, I love WARM, and might love indirect indexes too, but I need
to feel that indirect indexes are a clear win for the added complexity,
both in our code, and for the user API.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PoC: Make it possible to disallow WHERE-less UPDATE and DELETE

2017-01-11 Thread Pavel Stehule
Hi

2017-01-10 17:31 GMT+01:00 David Fetter :

> On Mon, Jan 09, 2017 at 07:52:11PM -0300, Alvaro Herrera wrote:
> > David Fetter wrote:
> >
> > > +   if (query->commandType == CMD_UPDATE || query->commandType ==
> CMD_DELETE)
> > > +   {
> > > +   /* Make sure there's something to look at. */
> > > +   Assert(query->jointree != NULL);
> > > +   if (query->jointree->quals == NULL)
> > > +   ereport(ERROR,
> > > +   (errcode(ERRCODE_SYNTAX_ERROR),
> > > +errmsg("%s requires a WHERE
> clause when the require_where hook is enabled.",
> > > +query->commandType ==
> CMD_UPDATE ? "UPDATE" : "DELETE"),
> > > +errhint("To %s all rows, use
> \"WHERE true\" or similar.",
> > > +query->commandType ==
> CMD_UPDATE ? "update" : "delete")));
> > > +   }
> >
> > Per my earlier comment, I think this should use
> > ERRCODE_S_R_E_PROHIBITED_SQL_STATEMENT_ATTEMPTED instead.
>
> Fixed.
>
> > I think this should say "the \"require_where\" extension" rather than
> > use the term "hook".
>
> Fixed.
>
> > (There are two or three translatability rules violations in this
> > snippet,
>
> Based on the hints in the docs around translation, I've
> refactored this a bit.
>

Final review:

1. there are no problems with patching or compilation.
2. all regression tests passed
3. the documentation and regression tests are good enough
4. the code respects Postgres formatting rules

I'll mark this patch as ready for committer.

Regards

Pavel


>
> > but since this is an extension and those are not translatable, I
> > won't elaborate further.)
>
> "Not translatable," or "not currently translated?"
>
> Best,
> David.
> --
> David Fetter  http://fetter.org/
> Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
> Skype: davidfetter  XMPP: david(dot)fetter(at)gmail(dot)com
>
> Remember to vote!
> Consider donating to Postgres: http://www.postgresql.org/about/donate
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>
>
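
For reference, a minimal sketch of the loadable-module shape such a check
takes, assuming the standard post_parse_analyze_hook API. The function
name and message wording below are illustrative rather than copied from
the patch under review; the errcode is the one suggested above.

#include "postgres.h"
#include "fmgr.h"
#include "parser/analyze.h"

PG_MODULE_MAGIC;

void		_PG_init(void);

static post_parse_analyze_hook_type prev_post_parse_analyze_hook = NULL;

/* Reject WHERE-less UPDATE/DELETE during parse analysis. */
static void
require_where_check(ParseState *pstate, Query *query)
{
	/* Chain to any previously installed hook first. */
	if (prev_post_parse_analyze_hook)
		prev_post_parse_analyze_hook(pstate, query);

	if (query->commandType == CMD_UPDATE || query->commandType == CMD_DELETE)
	{
		/* Make sure there's something to look at. */
		Assert(query->jointree != NULL);
		if (query->jointree->quals == NULL)
			ereport(ERROR,
					(errcode(ERRCODE_S_R_E_PROHIBITED_SQL_STATEMENT_ATTEMPTED),
					 errmsg("%s requires a WHERE clause when the \"require_where\" extension is loaded",
							query->commandType == CMD_UPDATE ? "UPDATE" : "DELETE"),
					 errhint("To %s all rows, use \"WHERE true\" or similar.",
							 query->commandType == CMD_UPDATE ? "update" : "delete")));
	}
}

void
_PG_init(void)
{
	prev_post_parse_analyze_hook = post_parse_analyze_hook;
	post_parse_analyze_hook = require_where_check;
}

Loaded via shared_preload_libraries (or session_preload_libraries), this
rejects the statements at parse-analysis time, before any planning work
is done.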


Re: [HACKERS] pg_restore accepts -j -1

2017-01-11 Thread Stephen Frost
Ashutosh,

* Ashutosh Bapat (ashutosh.ba...@enterprisedb.com) wrote:
> On Tue, Jan 10, 2017 at 10:18 AM, Stephen Frost  wrote:
> > For reasons which seem likely to be entirely unintentional, pg_restore
> > will accept a '-1' for -j:
> >
> > pg_restore -j -1
> >
> > This seems to result in the parallel state being NULL and so things
> > don't outright break, but it hardly seems likely to be what the user was
> > asking for -- my guess is that they actually wanted "parallel, single
> > transaction", which we don't actually support:
> >
> > -> pg_restore -j 2 -1
> > pg_restore: cannot specify both --single-transaction and multiple jobs
> >
> > We also don't accept -1 for pg_dump:
> >
> > -> pg_dump -j -1
> > pg_dump: invalid number of parallel jobs
> >
> > If I'm missing something, please let me know, otherwise I'll plan to put
> > the same check into pg_restore which exists in pg_dump.
> 
> Both the code blocks were added by 9e257a18, but I don't see any
> description of why they are different in pg_dump.c and pg_restore.c.

Right.

> In fact, per comments in pg_restore.c, that condition should be the same
> as in pg_dump.c. I am not sure whether it's just the Windows-specific
> condition or the whole block, but I don't see any reason not to
> replicate the same conditions in pg_restore.c.

I'm pretty sure that comment is about the Windows-specific check of
MAXIMUM_WAIT_OBJECTS, but I don't think there's any reason to accept a
zero or negative value for numWorkers.

Attached patch adds the same check to pg_restore that's in pg_dump
already.  Looks like it should back-patch to 9.3 pretty cleanly and I'll
add a similar check for 9.2.

Any thoughts about adding the Windows-specific MAXIMUM_WAIT_OBJECTS
check to 9.2 pg_restore..?  It certainly looks entirely straightforward
to do, and though I don't recall hearing anyone complaining about trying
to run pg_restore on Windows with lots of jobs and having it fall over,
it might avoid a bit of frustration for anyone who does try.

Thanks!

Stephen
diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
new file mode 100644
index b21fd26..b69082d
*** a/src/bin/pg_dump/pg_restore.c
--- b/src/bin/pg_dump/pg_restore.c
*** main(int argc, char **argv)
*** 402,407 
--- 402,413 
  	if (opts->tocFile)
  		SortTocFromFile(AH);
  
+ 	if (numWorkers <= 0)
+ 	{
+ 		fprintf(stderr, _("%s: invalid number of parallel jobs\n"), progname);
+ 		exit(1);
+ 	}
+ 
  	/* See comments in pg_dump.c */
  #ifdef WIN32
  	if (numWorkers > MAXIMUM_WAIT_OBJECTS)


signature.asc
Description: Digital signature
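
The reason "-j -1" gets through at all is presumably that the option value
is read with atoi(), which returns -1 without any syntax or range checking
(treat the exact call site as an assumption; the point is only that atoi()
cannot report errors). A stricter way to read such an option, sketched
here with a hypothetical helper:

#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/*
 * Hypothetical strict parser for -j: accept only positive integers.
 * atoi("-1") yields -1 and atoi("2x") yields 2, both silently.
 */
static int
parse_jobs(const char *arg)
{
	char	   *endptr;
	long		val;

	errno = 0;
	val = strtol(arg, &endptr, 10);
	if (errno != 0 || endptr == arg || *endptr != '\0' ||
		val <= 0 || val > INT_MAX)
		return -1;				/* caller reports "invalid number of parallel jobs" */

	return (int) val;
}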


Re: [HACKERS] merging some features from plpgsql2 project

2017-01-11 Thread Merlin Moncure
On Tue, Jan 10, 2017 at 7:44 AM, Marko Tiikkaja  wrote:
> On Tue, Jan 10, 2017 at 2:26 PM, Peter Eisentraut
>  wrote:
>>
>> It's not like PL/pgSQL is the king of brevity.
>
>
> This is essentially saying "PL/PgSQL isn't perfect, so we shouldn't try and
> make it better".  I hear this argument a lot, and as long as people keep
> rejecting improvements for this reason they can keep saying it.  It's a
> self-fulfilling prophecy.

Agreed.  But adding language features, especially syntactical ones,
demands prudence; there is good reason to limit keywords like that.
What about:
pgsql.rows
pgsql.found
pgsql.sqlerrm
etc
as automatic variables (I think this was suggested upthread).
Conflicts with existing structures are of course an issue, but I bet it
could be worked out.

I also kinda disagree on the brevity point, or at least would like to
add some color.  SQL is verbose in the sense of "let's make everything
an English-language sentence" but incredibly terse relative to other
language implementations of the same task.  Embedded SQL tends to be
uniformly clumsy due to all of the extra handling of errors,
parameterization, etc.  This is why we write plpgsql naturally.

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
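
To make the verbosity comparison concrete, here is roughly what a single
parameterized lookup plus error handling looks like through libpq in C,
using only standard libpq calls; the employees table and the helper
function are invented for illustration. In PL/pgSQL the same operation is
one SELECT ... INTO, with errors surfaced by the runtime.

#include <stdio.h>
#include <libpq-fe.h>

/* Fetch one employee name by id; every step needs explicit handling. */
static void
print_employee_name(PGconn *conn, const char *emp_id)
{
	const char *params[1] = {emp_id};
	PGresult   *res;

	res = PQexecParams(conn,
					   "SELECT name FROM employees WHERE id = $1",
					   1,			/* one parameter */
					   NULL,		/* let the server infer its type */
					   params,
					   NULL, NULL,	/* text-format parameter */
					   0);			/* text-format result */
	if (PQresultStatus(res) != PGRES_TUPLES_OK)
		fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
	else if (PQntuples(res) == 1)
		printf("%s\n", PQgetvalue(res, 0, 0));

	PQclear(res);
}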

