Re: [HACKERS] Patent warning about the Greenplum source code

2015-10-30 Thread Bruce Momjian
On Fri, Oct 30, 2015 at 09:56:48AM +0100, Andres Freund wrote:
> Hi,
> 
> I don't really want to discuss patent issues publically.

While we don't want to discuss patented ideas, the patent terms are an
important topic here.

> On 2015-10-30 04:47:35 -0400, Bruce Momjian wrote:
> > However, while the license defines and uses "Derivative Works", it does
> > not mention that in the patent grant clause.  I assume this means that
> > patent grants do not apply to derived works, meaning if code or ideas
> > were moved from Greenplum to Postgres (which is not Apache 2.0
> > licensed), it would not have a patent grant. I talked to Greenplum staff
> > about this a few months ago and they did not dispute my analysis.
> 
> The easiest thing would be to dual-license such contributions to
> postgres. That sounds quite possible to me.

Yes, but once they get contributions from outside, that becomes much
harder to do.

> > Therefore, I caution people against viewing the Greenplum source code,
> > as you might see patented ideas that could later be implemented in
> > Postgres, opening Postgres up to increased patent violation problems.  I
> > am also concerned about existing community members who work for
> > Pivotal/Greenplum and therefore are required to view the patented source
> > code.
> 
> Issues around this are much larger than patents. Any contribution made
> under employment carries such risks. That's why the kernel has the
> signed-off-by policy.
> 
> Check the section about signed-off-by in
> https://www.kernel.org/doc/Documentation/SubmittingPatches
> and the simpler
> https://ltsi.linuxfoundation.org/developers/signed-process

Yes, this does expose a missing part of our existing process.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_dump

2015-10-30 Thread Дмитрий Воронин

> More specifically, I am not quite understanding the distinction
> between "all cluster" and "global objects."

"All cluster" means roles, tablespaces, and databases with their
contents. "Global objects" means just roles and tablespaces.

> What do you have in mind on the implementation side? Do you think
> pg_dump is a suitable baseline, or were you thinking of something
> different, and if so, what?

I think the baseline is pg_dump. pg_dump creates a dump of a database and
its contents. pg_dump must back up comments and security labels (if they
exist) in some portable format (see my earlier messages). In our solution
we now use the proposed way to back up and restore COMMENTs and SECURITY
LABELs on a DATABASE.

If my solution is good, I am ready to cooperate on rethinking and
rewriting (if needed) the dump mechanism in PostgreSQL.

P.S. I already think that we need to rethink how objects are dumped and
restored in PostgreSQL.

Thank you.

-- 
Best regards, Dmitry Voronin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Patent warning about the Greenplum source code

2015-10-30 Thread Bruce Momjian
Some of you might have seen that the Greenplum database source code has
been published:

https://adtmag.com/articles/2015/10/28/greenplum-open-sourced.aspx

under the Apache 2.0 license:

http://www.apache.org/licenses/LICENSE-2.0

The source code has known patents owned by Pivotal/Greenplum.  The
license has a patent grant clause:

3. Grant of Patent License. Subject to the terms and conditions of this
License, each Contributor hereby grants to You a perpetual, worldwide,
non-exclusive, no-charge, royalty-free, irrevocable (except as stated in
this section) patent license to make, have made, use, offer to sell,
sell, import, and otherwise transfer the Work, where such license
applies only to those patent claims licensable by such Contributor that
are necessarily infringed by their Contribution(s) alone or by
combination of their Contribution(s) with the Work to which such
Contribution(s) was submitted. If You institute patent litigation
against any entity (including a cross-claim or counterclaim in a
lawsuit) alleging that the Work or a Contribution incorporated within
the Work constitutes direct or contributory patent infringement, then
any patent licenses granted to You under this License for that Work
shall terminate as of the date such litigation is filed.

However, while the license defines and uses "Derivative Works", it does
not mention that in the patent grant clause.  I assume this means that
patent grants do not apply to derived works, meaning if code or ideas
were moved from Greenplum to Postgres (which is not Apache 2.0
licensed), it would not have a patent grant. I talked to Greenplum staff
about this a few months ago and they did not dispute my analysis.

Therefore, I caution people against viewing the Greenplum source code,
as you might see patented ideas that could later be implemented in
Postgres, opening Postgres up to increased patent violation problems.  I
am also concerned about existing community members who work for
Pivotal/Greenplum and therefore are required to view the patented source
code.  The license issue might eventually be improved by
Pivotal/Greenplum, but, for now, I think caution is necessary.

Of course, never mention known-patented ideas in any community forum,
including this email list.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patent warning about the Greenplum source code

2015-10-30 Thread Andres Freund
Hi,

I don't really want to discuss patent issues publically.

On 2015-10-30 04:47:35 -0400, Bruce Momjian wrote:
> However, while the license defines and uses "Derivative Works", it does
> not mention that in the patent grant clause.  I assume this means that
> patent grants do not apply to derived works, meaning if code or ideas
> were moved from Greenplum to Postgres (which is not Apache 2.0
> licensed), it would not have a patent grant. I talked to Greenplum staff
> about this a few months ago and they did not dispute my analysis.

The easiest thing would be to dual-license such contributions to
postgres. That sounds quite possible to me.

> Therefore, I caution people against viewing the Greenplum source code,
> as you might see patented ideas that could later be implemented in
> Postgres, opening Postgres up to increased patent violation problems.  I
> am also concerned about existing community members who work for
> Pivotal/Greenplum and therefore are required to view the patented source
> code.

Issues around this are much larger than patents. Any contribution made
under employment carries such risks. That's why the kernel has the
signed-off-by policy.

Check the section about signed-off-by in
https://www.kernel.org/doc/Documentation/SubmittingPatches
and the simpler
https://ltsi.linuxfoundation.org/developers/signed-process


Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Dangling Client Backend Process

2015-10-30 Thread Robert Haas
On Tue, Oct 27, 2015 at 6:29 AM, Rajeev rastogi
 wrote:
> On 23 October 2015 01:58, Robert Haas [mailto:robertmh...@gmail.com] Wrote:
>>Well, I'm not buying this extra PostmasterIsAlive() call on every pass
>>through the main loop.  That seems more expensive than we can really
>>justify. Checking this when we're already calling WaitLatchOrSocket is
>>basically free, but the other part is not.
>
> Agree.
>
>>Here's a version with that removed and some changes to the comments.
>
> Thanks for changing.
>
>>I still don't think this is quite working right, though, because here's
>>what happened when I killed the postmaster:
>>
>>rhaas=# select 1;
>> ?column?
>>----------
>>        1
>>(1 row)
>>
>>rhaas=# \watch
>>Watch every 2s    Thu Oct 22 16:24:10 2015
>>
>> ?column?
>>----------
>>        1
>>(1 row)
>>
>>Watch every 2s    Thu Oct 22 16:24:12 2015
>>
>> ?column?
>>----------
>>        1
>>(1 row)
>>
>>Watch every 2s    Thu Oct 22 16:24:14 2015
>>
>> ?column?
>>----------
>>        1
>>(1 row)
>>
>>Watch every 2s    Thu Oct 22 16:24:16 2015
>>
>> ?column?
>>----------
>>        1
>>(1 row)
>>
>>server closed the connection unexpectedly
>>This probably means the server terminated abnormally
>>before or while processing the request.
>>The connection to the server was lost. Attempting reset: Failed.
>>
>>Note that the error message doesn't actually show up on the client (it
>>did show up in the log).  I guess that may be inevitable if we're
>>blocked in secure_write(), but if we're in secure_read() maybe it should
>>work?  I haven't investigated why it doesn't.
>
> Actually in this case the client is not waiting for a response from the
> server; rather, it is waiting for a new command from the user.
> So even though the server has sent the response, the client is not able
> to receive it.
> By the time the client receives the next command to execute, the
> connection has been terminated from the server side, and hence it comes
> out with the above message (i.e. server closed the connection...).
>
> Though I am also in favor of providing some error message to the client,
> with the current infrastructure I don't think there is any way to pass
> this error message to the client [this error has happened without
> involvement of the client side].
>
> Comments?

Hmm.  ProcessInterrupts() signals some FATAL errors while the
connection is idle, and rumor has it that that works: the client
doesn't immediately read the FATAL error, but the next time it sends a
query, it tries to read from the connection and sees the FATAL error
at that time.  I wonder why that's not working here.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Cross-check recent documentation changes

2015-10-30 Thread Robins
On 29 October 2015 at 15:35, Amit Langote 
wrote:

> errmsg_plural() function determines whether to output the singular version
> or the plural
>

Duh. Thanks Amit!
I should have noticed the function-name change.
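
For anyone else reading along, the call pattern is roughly this (a
minimal sketch; the message text and the nskipped variable are made up):

    /* errmsg_plural() picks the singular or plural format string based
     * on the count; both strings are exposed to translators. */
    ereport(NOTICE,
            (errmsg_plural("%d index was skipped",
                           "%d indexes were skipped",
                           nskipped,    /* selects singular vs. plural */
                           nskipped))); /* consumed by the %d */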

--
Robins Tharakan


Re: [HACKERS] pg_dump LOCK TABLE ONLY question

2015-10-30 Thread Filip Rembiałkowski
Please take it as a very naive and basic approach :-)

What could go wrong here?





diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 36863df..57a50b5 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -5169,9 +5169,9 @@ getTables(Archive *fout, DumpOptions *dopt, int *numTables)
 	 * Read-lock target tables to make sure they aren't DROPPED or altered
 	 * in schema before we get around to dumping them.
 	 *
-	 * Note that we don't explicitly lock parents of the target tables; we
-	 * assume our lock on the child is enough to prevent schema
-	 * alterations to parent tables.
+	 * Note that we explicitly lock neither parents nor children of the
+	 * target tables; we assume our lock on the child is enough to prevent
+	 * schema alterations to parent tables.
 	 *
 	 * NOTE: it'd be kinda nice to lock other relations too, not only
 	 * plain tables, but the backend doesn't presently allow that.
@@ -5179,11 +5179,18 @@ getTables(Archive *fout, DumpOptions *dopt, int *numTables)
 		if (tblinfo[i].dobj.dump && tblinfo[i].relkind == RELKIND_RELATION)
 		{
 			resetPQExpBuffer(query);
-			appendPQExpBuffer(query,
-							  "LOCK TABLE %s IN ACCESS SHARE MODE",
-							  fmtQualifiedId(fout->remoteVersion,
-											 tblinfo[i].dobj.namespace->dobj.name,
-											 tblinfo[i].dobj.name));
+			if (fout->remoteVersion >= 80400)
+				appendPQExpBuffer(query,
+								  "LOCK TABLE ONLY %s IN ACCESS SHARE MODE",
+								  fmtQualifiedId(fout->remoteVersion,
+												 tblinfo[i].dobj.namespace->dobj.name,
+												 tblinfo[i].dobj.name));
+			else
+				appendPQExpBuffer(query,
+								  "LOCK TABLE %s IN ACCESS SHARE MODE",
+								  fmtQualifiedId(fout->remoteVersion,
+												 tblinfo[i].dobj.namespace->dobj.name,
+												 tblinfo[i].dobj.name));
 			ExecuteSqlStatement(fout, query->data);
 		}





On Fri, Oct 16, 2015 at 5:06 PM, Robert Haas  wrote:
> On Thu, Oct 15, 2015 at 9:13 PM, Jim Nasby  wrote:
>> OTOH, now that the catalog is MVCC capable, do we even still need to lock
>> the objects for a schema-only dump?
>
> Yes.  The MVCC snapshots used for catalog reads are stable only for
> the duration of one particular catalog read.  We're not using the
> transaction snapshot.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] September 2015 Commitfest

2015-10-30 Thread Michael Paquier
On Fri, Oct 30, 2015 at 2:02 PM, Robert Haas wrote:
> On Fri, Oct 30, 2015 at 10:47 AM, Michael Paquier wrote:
>> On Thu, Oct 22, 2015 at 9:21 AM, Andres Freund wrote:
>>>> Among the five patches marked as ready for committer, one is a bug fix
>>>> that should be back-patched (ahem). Shouldn't we move on with those
>>>> entries first?
>>>
>>> I think at this point we essentially can just move all entries to the
>>> next. Will do that, and note down which patches haven't gotten any real
>>> review.
>>
>> We are close to the end of the month. Should I move on to do the
>> vacuuming or are you planning to do it? At this stage, to be fair with
>> people whose patches are in "waiting on author" state and because
>> there is not much time until the next CF begins, I propose to remove
>> all the remaining 43 entries with the same status as currently listed:
>> Needs review: 26. Waiting on Author: 11. Ready for Committer: 6.

So, seeing nothing happening I have done the above, opened 2015-11 CF
and closed the current one.

> Gosh, that's a lot of stuff that didn't get reviewed.  :-(

Yep.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] September 2015 Commitfest

2015-10-30 Thread Andres Freund
On 2015-10-31 00:42:54 +0100, Michael Paquier wrote:
> On Fri, Oct 30, 2015 at 2:02 PM, Robert Haas wrote:
> > On Fri, Oct 30, 2015 at 10:47 AM, Michael Paquier wrote:
> >> On Thu, Oct 22, 2015 at 9:21 AM, Andres Freund wrote:
> >>>> Among the five patches marked as ready for committer, one is a bug fix
> >>>> that should be back-patched (ahem). Shouldn't we move on with those
> >>>> entries first?
> >>>
> >>> I think at this point we essentially can just move all entries to the
> >>> next. Will do that, and note down which patches haven't gotten any real
> >>> review.
> >>
> >> We are close to the end of the month. Should I move on to do the
> >> vacuuming or are you planning to do it? At this stage, to be fair with
> >> people whose patches are in "waiting on author" state and because
> >> there is not much time until the next CF begins, I propose to remove
> >> all the remaining 43 entries with the same status as currently listed:
> >> Needs review: 26. Waiting on Author: 11. Ready for Committer: 6.
> 
> So, seeing nothing happening I have done the above, opened 2015-11 CF
> and closed the current one.

You seemingly moved all entries over, even the ones which had been
waiting-on-author for a long while? I think we should return items
on there with a lot of prejudice. Otherwise we're never going to get
anywhere.

> > Gosh, that's a lot of stuff that didn't get reviewed.  :-(
> 
> Yep.

Yea, this is probably one of the worst commitfests ever from the point
of reviewer participation.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] onlyvalue aggregate (was: First Aggregate Funtion?)

2015-10-30 Thread Robert Haas
On Wed, Oct 28, 2015 at 5:50 PM, Marko Tiikkaja  wrote:
> Here's a patch for the aggregate function outlined by Corey Huinker in
> CADkLM=foA_oC_Ri23F9PbfLnfwXFbC3Lt8bBzRu3=cb77g9...@mail.gmail.com .  I
> called it "onlyvalue", which is a horrible name, but I have nothing better
> to offer.  (Corey called it "only", but that doesn't really work since ONLY
> is a fully reserved keyword.)

I've written an aggregate that does something like this a few times.
I think one time I called it "the", which is probably too clever, but
then you could query for the(project_manager) and similar.  I've
usually written it to not error-check and just return the first
non-NULL value it runs across, which suggests a name like any_old() or
whatever().

I actually think by comparison with those ideas, onlyvalue() - or
maybe only_value() - is not bad.

> I'll add this to September's commit fest,

November, probably.

> but if you want to bash me or the
> patch in the meanwhile, go ahead.

What if we want to say nice things?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Getting sorted data from foreign server

2015-10-30 Thread Ashutosh Bapat
If there is a COLLATE clause in the ORDER BY, the server crashes with the
assertion
+	Assert(loc_cxt.state == FDW_COLLATE_NONE ||
+		   loc_cxt.state == FDW_COLLATE_SAFE);


The assertion is fine as long as is_foreign_expr() tests only boolean
expressions (appearing in quals). This patch uses the function to test an
expression appearing in an ORDER BY clause, which need not be boolean.
The attached patch removes the assertion and instead makes the function
return false when the walker deems the collation of the expression unsafe.
The walker itself cannot return false when it encounters an unsafe
expression, since the subtree it's examining might be part of a larger
expression which does not ultimately use the collation.
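
For context, the walker tracks collation safety with a three-valued
state; these are the states named in the assertion above (the comments
are mine, paraphrasing deparse.c):

    typedef enum
    {
        FDW_COLLATE_NONE,       /* expression is of a noncollatable type */
        FDW_COLLATE_SAFE,       /* collation derives from a foreign Var */
        FDW_COLLATE_UNSAFE      /* collation derives from something else */
    } FDWCollateState;

With the patch, is_foreign_expr() simply refuses to ship any expression
that ends up in the UNSAFE state instead of asserting that can't happen.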

On Wed, Oct 28, 2015 at 11:51 AM, Ashutosh Bapat <
ashutosh.ba...@enterprisedb.com> wrote:

> On Tue, Oct 27, 2015 at 6:44 PM, Fabrízio de Royes Mello <
> fabriziome...@gmail.com> wrote:
>
>>
>>
>> On Tue, Oct 27, 2015 at 5:26 AM, Ashutosh Bapat <
>> ashutosh.ba...@enterprisedb.com> wrote:
>> >
>> >
>> >
>> > On Fri, Oct 23, 2015 at 2:43 AM, Robert Haas 
>> wrote:
>> >>
>> >> On Wed, Oct 21, 2015 at 5:23 AM, Ashutosh Bapat
>> >>  wrote:
>> >> > Increasing the sorting cost factor (when use_remote_estimates =
>> false) from
>> >> > 1.1 to 1.2 makes the difference disappear.
>> >> >
>> >> > Since the startup costs for postgres_fdw are large portion of total
>> cost,
>> >> > extra 10% of rest of the cost is comparable to 1% fuzzy limit. IMO,
>> we
>> >> > shouldn't bother too much about it as the path costs are not much
>> different.
>> >>
>> >> My feeling is that cranking the sorting cost factor up to 20-25% would
>> >> be a good idea, just so we have less unnecessary plan churn.  I dunno
>> >> if sorting always costs that much, but if a 10% cost overhead is
>> >> really 1% because it only applies to a fraction of the cost, I don't
>> >> think that's good.  The whole point was to pick something large enough
>> >> that we wouldn't take the sorted path unless we will benefit from the
>> >> sort, and clearly that's not what happened here.
>> >>
>> >
>> > PFA patch with the default multiplication factor for sort bumped up to
>> 1.2.
>> >
>>
>> +/* If no remote estimates, assume a sort costs 10% extra */
>> +#define DEFAULT_FDW_SORT_MULTIPLIER 1.2
>>
>> Shouldn't the comment above say 20%?
>>
> Ah! Here's the patch with the comment fixed.
>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Postgres Database Company
>



-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
diff --git a/contrib/postgres_fdw/deparse.c b/contrib/postgres_fdw/deparse.c
index 697de60..3cb728f 100644
--- a/contrib/postgres_fdw/deparse.c
+++ b/contrib/postgres_fdw/deparse.c
@@ -186,23 +186,26 @@ is_foreign_expr(PlannerInfo *root,
 	 * Check that the expression consists of nodes that are safe to execute
 	 * remotely.
 	 */
 	glob_cxt.root = root;
 	glob_cxt.foreignrel = baserel;
 	loc_cxt.collation = InvalidOid;
 	loc_cxt.state = FDW_COLLATE_NONE;
	if (!foreign_expr_walker((Node *) expr, &glob_cxt, &loc_cxt))
 		return false;
 
-	/* Expressions examined here should be boolean, ie noncollatable */
-	Assert(loc_cxt.collation == InvalidOid);
-	Assert(loc_cxt.state == FDW_COLLATE_NONE);
+	/*
+	 * If the expression has a valid collation that does not arise from a
+	 * foreign var, the expression can not be sent over.
+	 */
+	if (loc_cxt.state == FDW_COLLATE_UNSAFE)
+		return false;
 
 	/*
 	 * An expression which includes any mutable functions can't be sent over
 	 * because its result is not stable.  For example, sending now() remote
 	 * side could cause confusion from clock offsets.  Future versions might
 	 * be able to make this choice with more granularity.  (We check this last
 	 * because it requires a lot of expensive catalog lookups.)
 	 */
 	if (contain_mutable_functions((Node *) expr))
 		return false;
@@ -1870,10 +1873,57 @@ printRemoteParam(int paramindex, Oid paramtype, int32 paramtypmod,
  */
 static void
 printRemotePlaceholder(Oid paramtype, int32 paramtypmod,
 	   deparse_expr_cxt *context)
 {
 	StringInfo	buf = context->buf;
 	char	   *ptypename = format_type_with_typemod(paramtype, paramtypmod);
 
 	appendStringInfo(buf, "((SELECT null::%s)::%s)", ptypename, ptypename);
 }
+
+/*
+ * Deparse an ORDER BY clause according to the given pathkeys for the given
+ * base relation. From the given pathkeys, expressions belonging entirely to
+ * the given base relation are obtained and deparsed.
+ */
+void
+appendOrderByClause(StringInfo buf, PlannerInfo *root, RelOptInfo *baserel,
+	List *pathkeys)
+{
+	ListCell   *lcell;
+	deparse_expr_cxt context;
+	int			nestlevel;
+	char	   *delim = " ";
+
+	/* Set up context struct for recursion */
+	context.root = root;
+	context.foreignrel = baserel;
+	context.buf = buf;
+	context.params_list = NULL;
+
+	/* Make sure any constants in the exprs are printed portably */
+	nestlevel = 

Re: [HACKERS] plpgsql - DECLARE - cannot to use %TYPE or %ROWTYPE for composite types

2015-10-30 Thread Pavel Stehule
2015-10-19 9:52 GMT+02:00 Pavel Stehule :

> Hi,
>
> We cannot declare a variable whose type references another composite
> variable. This limit is probably artificial, because any composite type
> is also a regular type in PostgreSQL.
>
> The issue:
>
> referencing on composite variables doesn't work
>
> do $$ declare x int; y x%type; begin end; $$; -- ok
> do $$ declare x pg_class; y x%type; begin end; $$; -- invalid type name
> "x%type"
> do $$ declare x pg_class; y x%rowtype; begin end; $$; -- relation "x" does
> not exist
>
> %ROWTYPE needs a record in pg_class. Probably we should not change
> that; the change could bring compatibility issues. So there are two
> possibilities:
>
> 1. %TYPE can be used for any kind of variable. This behavior will be
> consistent with polymorphic parameters - we have "anyelement", but we do
> not have "anyrow".
>
> 2. Introduce a new keyword - %RECTYPE. It can work, but there will be a
> gap relative to polymorphic parameters.
>
> Comments, notices?
>
>
Hi

I am sending a patch that enables references to polymorphic parameters of
row types. Another piece of functionality is the possibility of getting
the array or element type of a referenced variable. It removes some gaps
in how polymorphic parameters can be used.

 create type test_composite_type as (x int, y int);
create or replace function test_simple(src anyelement)
returns anyelement as $$
declare dest src%type;
begin
  dest := src;
  return dest;
end;
$$ language plpgsql;
select test_simple(10);
 test_simple
-------------
          10
(1 row)

select test_simple('hoj'::text);
 test_simple
-------------
 hoj
(1 row)

select test_simple((10,20)::test_composite_type);
 test_simple
-------------
 (10,20)
(1 row)

create or replace function test_poly_element(x anyelement)
returns anyarray as $$
declare result x%arraytype;
begin
  result := ARRAY[x];
  raise notice '% %', pg_typeof(result), result;
  return result;
end;
$$ language plpgsql;
select test_poly_element(1);
NOTICE:  integer[] {1}
 test_poly_element
-------------------
 {1}
(1 row)

select test_poly_element('hoj'::text);
NOTICE:  text[] {hoj}
 test_poly_element
-------------------
 {hoj}
(1 row)

select test_poly_element((10,20)::test_composite_type);
NOTICE:  test_composite_type[] {"(10,20)"}
 test_poly_element
-------------------
 {"(10,20)"}
(1 row)

create or replace function test_poly_array(x anyarray)
returns anyelement as $$
declare result x%elementtype;
begin
  result := x[1];
  raise notice '% %', pg_typeof(result), result;
  return result;
end;
$$ language plpgsql;
select test_poly_array(ARRAY[1]);
NOTICE:  integer 1
 test_poly_array
-----------------
               1
(1 row)

select test_poly_array(ARRAY['hoj'::text]);
NOTICE:  text hoj
 test_poly_array
-----------------
 hoj
(1 row)

select test_poly_array(ARRAY[(10,20)::test_composite_type]);
NOTICE:  test_composite_type (10,20)
 test_poly_array
-----------------
 (10,20)
(1 row)

Regards

Pavel



> Regards
>
> Pavel
>
>
>
commit 76d258edf9ef8e9645f47645a18d79f0d4245d41
Author: Pavel Stehule 
Date:   Fri Oct 30 11:48:33 2015 +0100

enhancing referenced types - possibility to get array or element type of referenced variable type.
row variables and row values are supported now too.

diff --git a/src/pl/plpgsql/src/pl_comp.c b/src/pl/plpgsql/src/pl_comp.c
index 1ae4bb7..333d2bc 100644
--- a/src/pl/plpgsql/src/pl_comp.c
+++ b/src/pl/plpgsql/src/pl_comp.c
@@ -1617,6 +1617,62 @@ plpgsql_parse_tripword(char *word1, char *word2, char *word3,
 	return false;
 }
 
+/*
+ * Derive a type from any base type, controlled by reftype_mode
+ */
+static PLpgSQL_type *
+derive_type(PLpgSQL_type *base_type, int reftype_mode)
+{
+	Oid typoid;
+
+	switch (reftype_mode)
+	{
+		case PLPGSQL_REFTYPE_TYPE:
+			return base_type;
+
+		case PLPGSQL_REFTYPE_ELEMENT:
+		{
+			typoid = get_element_type(base_type->typoid);
+			if (!OidIsValid(typoid))
+				ereport(ERROR,
+						(errcode(ERRCODE_DATATYPE_MISMATCH),
+						 errmsg("referenced variable should be an array, not type %s",
+								format_type_be(base_type->typoid))));
+
+			return plpgsql_build_datatype(typoid, -1,
+										  plpgsql_curr_compile->fn_input_collation);
+		}
+
+		case PLPGSQL_REFTYPE_ARRAY:
+		{
+			/*
+			 * Question: can we allow anyelement (array or nonarray) -> array direction.
+			 * if yes, then probably we have to modify enforce_generic_type_consistency,
+			 * parse_coerce.c where still is check on scalar type -> raise error
+			 * ERROR:  42704: could not find array type for data type integer[]
+			 *
+			if (OidIsValid(get_element_type(base_type->typoid)))
+return base_type;
+			*/
+
+			typoid = get_array_type(base_type->typoid);
+			if (!OidIsValid(typoid))
+				ereport(ERROR,
+						(errcode(ERRCODE_DATATYPE_MISMATCH),
+						 errmsg("there is no array type for type %s",
+								format_type_be(base_type->typoid))));
+
+			return plpgsql_build_datatype(typoid, -1,
+			

[HACKERS] Re: [HACKERS] ExclusiveLock on PostgreSQL - Fabio Mendonça

2015-10-30 Thread Robert Haas
On Wed, Oct 28, 2015 at 5:59 PM, Fabio Oliveira De Mendonca
 wrote:
> I have a process that inserts 600,000 rows into table "A", which has 130
> columns, and I'm receiving the "ExclusiveLock" error message and losing
> some rows during the transaction.  The insert transaction occurs every 2
> min., and every 1 min. a second process reads table "A" (joined with
> table "C" using the PK) to insert into a table ("B").  Well, I did think
> of creating partitions on table "A", but I don't believe that would
> correct the problem ("ExclusiveLock").

This isn't really the right mailing list for this question.

You might find 
https://listas.postgresql.org.br/cgi-bin/mailman/listinfo/pgbr-geral
helpful, or you can ask at
http://www.postgresql.org/list/pgsql-general/

You should also read
https://wiki.postgresql.org/wiki/Guide_to_reporting_problems --
because this report does not contain enough information for someone to
answer your question.  In particular, including the exact text of any
commands you executed and any error or other messages the system
generated would be helpful.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] ParallelContexts can get confused about which worker is which

2015-10-30 Thread Robert Haas
While testing last night, I discovered a serious case of brain fade in
parallel.c; the same conceptual mistake has also spread to
nodeGather.c.  parallel.c creates an array of ParallelWorkerInfo
structures, which are defined like this:

typedef struct ParallelWorkerInfo
{
BackgroundWorkerHandle *bgwhandle;
shm_mq_handle *error_mqh;
int32   pid;
} ParallelWorkerInfo;

These structures are typically accessed as pcxt->worker[i].  The
trouble is that pcxt->worker[i].bgwhandle is the bgwhandle for the
i'th worker to be registered, while pcxt->worker[i].error_mqh is the
error_mqh for the i'th worker to attach to the dynamic shared memory
segment.  But those might be totally different processes.

This happens to mostly work, because the postmaster will probably
start the processes in the same order that they are registered, and
the operating system will probably schedule them in the same order the
postmaster starts them.  And if you only have one worker, then you're
fine!  It also seems to work out that if all workers exit cleanly, any
shuffling of the background worker handles relative to the error queue
handles is harmless.  But it's still pretty broken.

There seem to be two ways to fix this.  One is to keep the bgwhandle
objects in a separate array from the error_mqh objects and reconstruct
after the fact what the mapping between those two sets of indexes is.
This solution looks complicated to code and generally pretty annoying
to get right.  The other way to fix this is to pass down the index
that the leader assigns to any given worker, and have the worker use
that index instead of allocating its own separate index after
connecting to the DSM segment.  Unfortunately, there's not really a
good way to pass that additional information down to the worker right
now, but we could fix that pretty easily by adding an additional field
to the BackgroundWorker structure, which the worker would then be able
to access via MyBgworkerEntry.  I suggest something like this:

--- a/src/include/postmaster/bgworker.h
+++ b/src/include/postmaster/bgworker.h
@@ -74,6 +74,7 @@ typedef enum
 #define BGW_DEFAULT_RESTART_INTERVAL	60
 #define BGW_NEVER_RESTART				-1
 #define BGW_MAXLEN						64
+#define BGW_EXTRALEN					128
 
 typedef struct BackgroundWorker
 {
@@ -85,6 +86,7 @@ typedef struct BackgroundWorker
 	char		bgw_library_name[BGW_MAXLEN];	/* only if bgw_main is NULL */
 	char		bgw_function_name[BGW_MAXLEN];	/* only if bgw_main is NULL */
 	Datum		bgw_main_arg;
+	char		bgw_extra[BGW_EXTRALEN];
 	pid_t		bgw_notify_pid;	/* SIGUSR1 this backend on start/stop */
 } BackgroundWorker;

The ability to pass down a little more information from the
registering process to the background worker seems like it would be
useful for more than just parallelism, so I imagine this change might
be generally welcomed.  Parallel workers would use the first four
bytes of bgw_extra to store the worker index, and other users of the
background worker facility could use it for whatever they like.
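
To make that concrete, here is a minimal sketch of how the proposed
field might be used for parallelism (the worker-index convention is the
proposal above, not existing code):

    /* Leader: stash the worker index in bgw_extra before registering. */
    BackgroundWorker worker;
    BackgroundWorkerHandle *handle;
    int         i = worker_index;   /* index assigned by the leader */

    memset(&worker, 0, sizeof(worker));
    /* ... fill in bgw_name, bgw_main, and the other fields ... */
    memcpy(worker.bgw_extra, &i, sizeof(int));
    RegisterDynamicBackgroundWorker(&worker, &handle);

    /* Worker: recover the index instead of allocating one on attach. */
    int         my_worker_index;

    memcpy(&my_worker_index, MyBgworkerEntry->bgw_extra, sizeof(int));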

Comments?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] security_barrier view option type mistake in create view document

2015-10-30 Thread Robert Haas
On Thu, Oct 29, 2015 at 6:02 AM, Haribabu Kommi
 wrote:
> The security_barrier view option is classified as string in the create
> view documentation.
> But it is actually a boolean. The type is mentioned correctly in alter
> view. Here I attached
> the patch with the correction.
>
> -security_barrier (string)
> +security_barrier (boolean)

Committed and back-patched to 9.4, where the error appears to have
been introduced.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Replication connection URI?

2015-10-30 Thread Oleksandr Shulgin
Heikki Linnakangas  writes:

> On 11/25/2014 05:11 PM, Heikki Linnakangas wrote:
>> On 11/24/2014 06:05 PM, Alex Shulgin wrote:
>>> Heikki Linnakangas  writes:
>
> It appears that replication connection doesn't support URI but only the
> traditional conninfo string.
>
> src/backend/replication/libpqwalreceiver/libpqwalreceiver.c:99: in 
> libpqrcv_connect():
>
>    snprintf(conninfo_repl, sizeof(conninfo_repl),
>             "%s dbname=replication replication=true fallback_application_name=walreceiver",
>             conninfo);
>
> A patch to fix this welcome?

 Yeah, seems like an oversight. Hopefully you can fix that without
 teaching libpqwalreceiver what connection URIs look like..
>>>
>>> Please see attached.  We're lucky that PQconnectdbParams has an option
>>> to parse and expand the first dbname parameter if it looks like a
>>> connection string (or a URI).
>>>
>>> The first patch is not on topic, I just spotted this missing check.
>>>
>>> The second one is a self-contained fix, but the third one which is the
>>> actual patch depends on the second one, because it specifies the dbname
>>> keyword two times: first to parse the conninfo/URI, then to override any
>>> dbname provided by the user with "replication" pseudo-database name.
>>
>> Hmm. Should we backpatch the second patch? It sure seems like an
>> oversight rather than deliberate that you can't override dbname from the
>> connection string with a later dbname keyword. I'd say "yes".
>>
>> How about the third patch? Probably not; it was an oversight with the
>> connection URI patch that it could not be used in primary_conninfo, but
>> it's not a big deal in practice as you can always use a non-URI
>> connection string instead.
>
> Ok, committed the second patch to all stable branches, and the third 
> patch to master.

It still looks like a bug that primary_conninfo doesn't accept URIs,
even though they were supposed to be handled transparently by all
interfaces using libpq...

Any chance we reconsider and back-patch this up to 9.2?
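
For reference, the PQconnectdbParams() dbname-expansion trick mentioned
upthread looks roughly like this from the caller's side (a sketch; the
URI is made up):

    #include <libpq-fe.h>

    /* With expand_dbname = 1, a first "dbname" value that looks like a
     * conninfo string or URI is parsed into individual parameters;
     * keywords given later (here "replication") are applied on top. */
    const char *keywords[] = {"dbname", "replication", NULL};
    const char *values[]   = {"postgresql://standby.example.com:5432/postgres",
                              "true", NULL};
    PGconn     *conn = PQconnectdbParams(keywords, values, 1);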

--
Alex


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] September 2015 Commitfest

2015-10-30 Thread Michael Paquier
On Thu, Oct 22, 2015 at 9:21 AM, Andres Freund wrote:
>> Among the five patches marked as ready for committer, one is a bug fix
>> that should be back-patched (ahem). Shouldn't we move on with those
>> entries first?
>
> I think at this point we essentially can just move all entries to the
> next. Will do that, and note down which patches haven't gotten any real
> review.

We are close to the end of the month. Should I move on to do the
vacuuming or are you planning to do it? At this stage, to be fair with
people whose patches are in "waiting on author" state and because
there is not much time until the next CF begins, I propose to remove
all the remaining 43 entries with the same status as currently listed:
Needs review: 26. Waiting on Author: 11. Ready for Committer: 6.
Regards,
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [ADMIN] Replication slots and isolation levels

2015-10-30 Thread Vladimir Borodin

> On 30 Oct 2015, at 14:30, Robert Haas wrote:
> 
> On Thu, Oct 29, 2015 at 3:29 PM, Oleksii Kliukin  wrote:
>> Could it be a consequence of how REPEATABLE READ transactions handle
>> snapshots? With REPEATABLE READ the snapshot is acquired only once at the
>> beginning of a transaction; a READ COMMITTED transaction re-evaluates its
>> snapshot with each new command.
> 
> I bet that's exactly it.

I still don't fully understand why it is so (the problem occurs while running
only one SELECT statement in READ COMMITTED, so only one snapshot is taken),
but if this is expected behavior, shouldn't the documentation mention that
using READ COMMITTED (which is the default) you may still get conflicts with
recovery while using replication slots?

> 
> -- 
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company


--
May the force be with you…
https://simply.name



Re: [HACKERS] [DESIGN] ParallelAppend

2015-10-30 Thread Robert Haas
On Wed, Oct 28, 2015 at 3:55 PM, Kouhei Kaigai  wrote:
> At PGconf.EU I had a chance to talk with Robert about this topic, and
> it became clear we have the same idea.
>
>> +---------+   * Sub-Plan 1 ... Index Scan on p1
>> |sub-plan |   * Sub-Plan 2 ... PartialSeqScan on p2
>> |index on *-> * Sub-Plan 2 ... PartialSeqScan on p2
>> |shared   |   * Sub-Plan 2 ... PartialSeqScan on p2
>> |memory   |   * Sub-Plan 3 ... Index Scan on p3
>> +---------+
>>
> In the above example, I let a non-parallel sub-plan use only
> 1 slot of the array, even though a PartialSeqScan takes 3 slots.
> It is a strict rule; a non-parallel-aware sub-plan can be picked
> up only once.
> The index into the sub-plan array is initialized to 0, then increased
> up to 5 by the workers as they process the parallel-aware Append.
> So, once a worker takes a non-parallel sub-plan, no other worker can
> ever take the same slot again; thus, no duplicated rows will be
> produced by a non-parallel sub-plan in the parallel-aware Append.
> Also, this array structure prevents too large a number of
> workers from picking up a particular parallel-aware sub-plan, because
> PartialSeqScan occupies 3 slots; that means at most three workers
> can pick up this sub-plan. If the 1st worker took the IndexScan on
> p1, and the 2nd-4th workers took the PartialSeqScan on p2, then the
> 5th worker (if any) will pick up the IndexScan on p3 even if the
> PartialSeqScan on p2 has not completed.

Actually, this is not exactly what I had in mind.  I was thinking that
we'd have a single array whose length is equal to the number of Append
subplans, and each element of the array would be a count of the number
of workers executing that subplan.  So there wouldn't be multiple
entries for the same subplan, as you propose here.  To distinguish
between parallel-aware and non-parallel-aware plans, I plan to put a
Boolean flag in the plan itself.
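
A minimal sketch of that scheme (all names here are hypothetical, not
existing code): one shared count per Append subplan, and each
participant picks the unfinished subplan with the fewest workers on it.

    typedef struct ParallelAppendState
    {
        slock_t     mutex;          /* protects the arrays below */
        bool        finished[MAX_APPEND_SUBPLANS];  /* subplan done? */
        int         nworkers[MAX_APPEND_SUBPLANS];  /* workers per subplan */
    } ParallelAppendState;

    static int
    choose_next_subplan(ParallelAppendState *pstate, int nplans)
    {
        int         i;
        int         best = -1;

        SpinLockAcquire(&pstate->mutex);
        for (i = 0; i < nplans; i++)
        {
            if (pstate->finished[i])
                continue;
            /* A non-parallel-aware subplan would also be skipped here
             * once nworkers[i] > 0; omitted for brevity. */
            if (best < 0 || pstate->nworkers[i] < pstate->nworkers[best])
                best = i;
        }
        if (best >= 0)
            pstate->nworkers[best]++;
        SpinLockRelease(&pstate->mutex);

        return best;                /* -1 means nothing left to run */
    }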

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] September 2015 Commitfest

2015-10-30 Thread Tom Lane
Andres Freund  writes:
> On 2015-10-31 00:42:54 +0100, Michael Paquier wrote:
>> On Fri, Oct 30, 2015 at 2:02 PM, Robert Haas wrote:
>>> Gosh, that's a lot of stuff that didn't get reviewed.  :-(

>> Yep.

> Yea, this is probably one of the worst commitfests ever from the point
> of reviewer participation.

FWIW, I'm expecting to be rather less AWOL for upcoming 'fests than
I have been for the last year or so.  I don't think I can fix this
problem by myself, though.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Freeze avoidance of very large table.

2015-10-30 Thread Amit Kapila
On Thu, Oct 8, 2015 at 11:05 PM, Simon Riggs  wrote:
>
> On 1 October 2015 at 23:30, Josh Berkus  wrote:
>>
>> On 10/01/2015 07:43 AM, Robert Haas wrote:
>> > On Thu, Oct 1, 2015 at 9:44 AM, Fujii Masao 
wrote:
>> >> I wonder how much it's worth renaming only the file extension while
>> >> there are many places where "visibility map" and "vm" are used,
>> >> for example, log messages, function names, variables, etc.
>> >
>> > I'd be inclined to keep calling it the visibility map (vm) even if it
>> > also contains freeze information.
>> >

What is your main worry about changing the name of this map: is it the
extra code churn, that we might introduce new issues, or that people are
already accustomed to calling this map the visibility map?

>>
>> -1 to rename.  Visibility Map is a perfectly good name.
>
>
> The name can stay the same, but specifically the file extension should
change.
>

It seems to me quite logical for understanding purposes as well.  Anyone
new who wants to work in this area or is looking into it will always
wonder why this map is named the visibility map even though it contains
information about the frozen state of each page as well as its
visibility.  So even though it makes no difference to the correctness of
the feature whether we retain the current name or change it to Visibility
& Freeze Map (aka vfm), I think it makes sense to change it for the sake
of the maintainability of this code.


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Parallel Seq Scan

2015-10-30 Thread Noah Misch
On Wed, Oct 28, 2015 at 01:04:12AM +0100, Robert Haas wrote:
> Well, OK.  That's not strictly a correctness issue, but here's an
> updated patch along the lines you suggested.


> Finally, have setup_param_list set a new ParamListInfo field,
> paramMask, to the parameters actually used in the expression, so that
> we don't try to fetch those that are not needed when serializing a
> parameter list.  This isn't necessary for performance, but it makes

s/performance/correctness/

> the performance of the parallel executor code comparable to what we
> do for cases involving cursors.

With that, the patch is ready.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] September 2015 Commitfest

2015-10-30 Thread Michael Paquier
On Sat, Oct 31, 2015 at 12:55 AM, Andres Freund  wrote:
> On 2015-10-31 00:42:54 +0100, Michael Paquier wrote:
>> On Fri, Oct 30, 2015 at 2:02 PM, Robert Haas wrote:
>> > On Fri, Oct 30, 2015 at 10:47 AM, Michael Paquier wrote:
>> >> On Thu, Oct 22, 2015 at 9:21 AM, Andres Freund wrote:
>> >>>> Among the five patches marked as ready for committer, one is a bug fix
>> >>>> that should be back-patched (ahem). Shouldn't we move on with those
>> >>>> entries first?
>> >>>
>> >>> I think at this point we essentially can just move all entries to the
>> >>> next. Will do that, and note down which patches haven't gotten any real
>> >>> review.
>> >>
>> >> We are close to the end of the month. Should I move on to do the
>> >> vacuuming or are you planning to do it? At this stage, to be fair with
>> >> people whose patches are in "waiting on author" state and because
>> >> there is not much time until the next CF begins, I propose to remove
>> >> all the remaining 43 entries with the same status as currently listed:
>> >> Needs review: 26. Waiting on Author: 11. Ready for Committer: 6.
>>
>> So, seeing nothing happening I have done the above, opened 2015-11 CF
>> and closed the current one.
>
> You seemingly moved all entries, even the ones which were
> waiting-on-author for a long while, over? I think we should return items
> on there with lot of prejudice. Otherwise we're never going to get
> anywhere.

I know. We should normally begin the cleanup activity far earlier IMO,
like at the end of the commitfest month, to give patch authors a
couple of weeks to rework what they have if they would like to resend
something for the next commitfest. At this stage it seems a little
bit too abrupt to just mark patches as returned with feedback without
notice; this gives patch authors no room to submit new patches,
assuming that authors were waiting for the patch to be marked as
returned with feedback before moving on to a new approach suggested by
the reviewers.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-10-30 Thread Amit Kapila
On Sat, Oct 31, 2015 at 2:50 AM, Merlin Moncure  wrote:
>
> Idle hanging transactions from poorly written applications are the
> bane of my existence.  Several months back one of them took down one
> of hour production websites for several hours.
>
> Unfortunately, the only way to deal with them is to terminate the
> backend which is heavy handed and in some cases causes further damage.
>   Something like pg_cancel_transaction(pid) would be nice; it would
> end the transaction regardless if in an actual statement or not.
>

Why is pg_cancel_backend(pid) not sufficient for the above use case?
Basically you want to roll back the current transaction, and I think that
can be achieved with pg_cancel_backend.

> Similarly, transaction_timeout would be a lot more effective than
> statement_timeout.
>

I think here by transaction_timeout you mean cancelling all
transactions that have been idle for transaction_timeout time.  So it is
better to call it transaction_idle_timeout.  Having said that, I am not
sure holding such a connection open is meaningful either, because there
is a high probability that the user of such a session will not perform
any further action for a long time; so why not have an idle_timeout that
terminates the session if it has been idle for idle_timeout time.
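
For what it's worth, a hypothetical idle timeout could probably reuse
the existing timeout framework. A rough sketch - the GUC
(IdleInTransactionTimeout) and the timeout ID are made up, while
enable_timeout_after()/disable_timeout() are the real APIs:

    /* Before waiting for the next client command: arm a timer if we
     * are idle inside a transaction block. */
    if (IdleInTransactionTimeout > 0 && IsTransactionOrTransactionBlock())
        enable_timeout_after(IDLE_IN_TRANSACTION_TIMEOUT,
                             IdleInTransactionTimeout);

    /* ... and disarm it as soon as the next command arrives: */
    disable_timeout(IDLE_IN_TRANSACTION_TIMEOUT, false);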


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] [ADMIN] Replication slots and isolation levels

2015-10-30 Thread Andres Freund
On 2015-10-30 13:42:19 +0100, Michael Paquier wrote:
> On Fri, Oct 30, 2015 at 12:40 PM, Vladimir Borodin wrote:
> > On Thu, Oct 29, 2015 at 3:29 PM, Oleksii Kliukin wrote:
> >> Could it be a consequence of how REPEATABLE READ transactions handle
> >> snapshots? With REPEATABLE READ the snapshot is acquired only once at the
> >> beginning of a transaction; a READ COMMITTED transaction re-evaluates its
> >> snapshot with each new command.
> >
> > I still don't fully understand why it is so (the problem occurs while
> > running only one SELECT statement in READ COMMITTED, so only one snapshot is
> > taken), but if this is expected behavior, shouldn't the documentation mention
> > that using READ COMMITTED (which is the default) you may still get conflicts
> > with recovery while using replication slots?
> 
> Replication slots and hot_standby_feedback are two different unrelated
> concepts, slots being aimed at retaining WAL.

Uh. Slots also retain the xmin horizon if hot_standby_feedback is
enabled on the standby?

> I guess that's the origin of your confusion:
> http://www.postgresql.org/message-id/20150616192141.gd2...@alap3.anarazel.de

That just says what I said above, I don't see how this makes replication
slots and hs feedback unrelated?

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [ADMIN] Replication slots and isolation levels

2015-10-30 Thread Robert Haas
On Fri, Oct 30, 2015 at 12:40 PM, Vladimir Borodin  wrote:
> I still don't fully understand why it is so (the problem occurs while
> running only one SELECT statement in READ COMMITTED, so only one snapshot is
> taken), but if this is expected behavior, shouldn't the documentation mention
> that using READ COMMITTED (which is the default) you may still get conflicts
> with recovery while using replication slots?

Are you doing BEGIN / one or more SELECT statements / END?

Or just a bare SELECT with no explicit transaction control?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: Fix parallel workers connection bug in pg_dump (Bug #13727)

2015-10-30 Thread Zeus Kronion
I'm still unclear on how to write regression tests for a connectivity bug.
Are they necessary in this case?

On Sun, Oct 25, 2015 at 5:55 PM, Zeus Kronion  wrote:

> Parallel workers were failing to connect to the database when running
> pg_dump with a connection string. The first of the following two commands
> runs without errors, while the second one fails:
> pg_dump "postgres://my-user:my-passw...@my.hostname.com:5432/my-db" -Fd
> -f my-dump
> pg_dump "postgres://my-user:my-passw...@my.hostname.com:5432/my-db" -Fd
> --jobs=9 -f my-dump
>
> The error message:
> pg_dump: [parallel archiver] connection to database "my-db" failed:
> fe_sendauth: no password supplied
>
> The password is not being stored correctly in the PGconn object when
> connecting with a connection string.
>
> This is my first time contributing to Postgres, so I tried to stick to the
> instructions from the "Submitting a Patch" wiki. This submission is for
> discussion because I haven't figured out how to write regression tests for
> this patch yet (and I would appreciate guidance).
>
> Target branch: master
> Compiles and tests successfully: true
> Platform-specific items: none
> Regression tests: still needed
> Documentation: N/A
> Performance implications: none
>


Re: [HACKERS] Dangling Client Backend Process

2015-10-30 Thread Tom Lane
Robert Haas  writes:
> Hmm.  ProcessInterrupts() signals some FATAL errors while the
> connection is idle, and rumor has it that that works: the client
> doesn't immediately read the FATAL error, but the next time it sends a
> query, it tries to read from the connection and sees the FATAL error
> at that time.  I wonder why that's not working here.

A likely theory is that the kernel is reporting failure to libpq's
send() because the other side of the connection is already gone.
This would be timing-dependent of course.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: Implement failover on libpq connect level.

2015-10-30 Thread Victor Wagner
On Fri, 30 Oct 2015 14:26:45 +0100
Robert Haas  wrote:

> On Thu, Oct 29, 2015 at 8:29 PM, Peter Eisentraut 
> wrote:

> 
> That's true, but doesn't allowing every parameter to be multiply
> specified greatly increase the implementation complexity for a pretty
> marginal benefit?  I think host and IP would hit 98% of the use cases
> here.

As far as I can tell from the experience of writing this patch, it
would greatly increase complexity.

If there were only a need to have multiple hosts, I could almost
completely encapsulate the changes in the DNS resolving code (which
already allows handling several addresses). The need to support a
different port for each host already required a change to the internal
storage, and as a consequence changes in the regression test suite
(src/interfaces/libpq/test/regress.out).

But both host and port are used in the same place - in the connect
system call. If we add the possibility of different values per host for
some parameter such as the database name, which is used significantly
later, i.e. when sending the first protocol message, the size of the
patch might well double.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_dump

2015-10-30 Thread rafael


On 10/29/2015 03:51 PM, Tom Lane wrote:
> We don't need hasty patches.  What we need is a re-think of the division
> of labor between pg_dump and pg_dumpall.  Up to now, pg_dump has only been
> charged with dumping/restoring the data "inside" an individual database,
> not with handling any database-level properties.  Those are the
> responsibility of pg_dumpall.
> 

Hello

A wiki page with some proposals to improve pg_dump can be found here:
https://wiki.postgresql.org/wiki/Pg_dump_improvements

It was created some time ago after a discussion on pgsql-hackers;
references are on the wiki page.

regards
-- 
Rafael Martinez Guerrero
Center for Information Technology
University of Oslo, Norway

PGP Public Key: http://folk.uio.no/rafael/


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] September 2015 Commitfest

2015-10-30 Thread Robert Haas
On Fri, Oct 30, 2015 at 10:47 AM, Michael Paquier
 wrote:
> On Thu, Oct 22, 2015 at 9:21 AM, Andres Freund wrote:
>>> Among the five patches marked as ready for committer, one is a bug fix
>>> that should be back-patched (ahem). Shouldn't we move on with those
>>> entries first?
>>
>> I think at this point we essentially can just move all entries to the
>> next. Will do that, and note down which patches haven't gotten any real
>> review.
>
> We are close to the end of the month. Should I move on to do the
> vacuuming or are you planning to do it? At this stage, to be fair with
> people whose patches are in "waiting on author" state and because
> there is not much time until the next CF begins, I propose to remove
> all the remaining 43 entries with the same status as currently listed:
> Needs review: 26. Waiting on Author: 11. Ready for Committer: 6.

Gosh, that's a lot of stuff that didn't get reviewed.  :-(

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: Implement failover on libpq connect level.

2015-10-30 Thread Robert Haas
On Thu, Oct 29, 2015 at 8:29 PM, Peter Eisentraut  wrote:
> On 10/28/15 4:18 AM, Victor Wagner wrote:
>> On Mon, 26 Oct 2015 16:25:57 -0400
>> Peter Eisentraut  wrote:
>>
>>> Also, this assumes that all the components other than host and port
>>> are the same.  Earlier there was a discussion about why the ports
>>> would ever need to be different.  Well, why can't the database names
>>> be different? I could have use for that.
>>
>> Because of way postgresql replication is implemented.
>
> There are multiple types of PostgreSQL replication, and there will be
> others in the future.

That's true, but doesn't allowing every parameter to be multiply
specified greatly increase the implementation complexity for a pretty
marginal benefit?  I think host and IP would hit 98% of the use cases
here.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [ADMIN] Replication slots and isolation levels

2015-10-30 Thread Vladimir Borodin

> On 30 Oct 2015, at 16:04, Robert Haas wrote:
> 
> On Fri, Oct 30, 2015 at 12:40 PM, Vladimir Borodin  wrote:
>> I still don't fully understand why it is so (the problem occurs while
>> running only one SELECT statement in READ COMMITTED, so only one snapshot is
>> taken), but if this is expected behavior, shouldn't the documentation mention
>> that using READ COMMITTED (which is the default) you may still get conflicts
>> with recovery while using replication slots?
> 
> Are you doing BEGIN / one or more SELECT statements / END?
> 
> Or just a bare SELECT with no explicit transaction control?

I've tried two ways - a bare SELECT in autocommit mode and BEGIN; SELECT;
ROLLBACK. I first described the problem in a thread on pgsql-admin@ [0];
there is a copy-paste from psql there, but during the conversation the
initial description was lost.

[0] 
http://www.postgresql.org/message-id/7f74c5ea-6741-44fc-b6c6-e96f18d76...@simply.name

> 
> -- 
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company


--
May the force be with you…
https://simply.name/ru



Re: [HACKERS] Move PinBuffer and UnpinBuffer to atomics

2015-10-30 Thread Alexander Korotkov
On Thu, Oct 29, 2015 at 8:18 PM, Alexander Korotkov <a.korot...@postgrespro.ru> wrote:

> On Thu, Sep 24, 2015 at 6:36 PM, Alexander Korotkov <a.korot...@postgrespro.ru> wrote:
>
>> On Thu, Sep 24, 2015 at 6:32 PM, Andres Freund wrote:
>>
>>> On 2015-09-15 20:16:10 +0300, YUriy Zhuravlev wrote:
>>> > We will be tested.
>>>
>>> Did you have a chance to run some benchmarks?
>>>
>>
>> Yes, we now have 60 physical cores intel server and we're running
>> benchmarks on it.
>>
>
> Andres and I reached a consensus that we should commit the CAS version first
> and then look at other optimizations.
> A refactored version of the atomic state patch is attached. The changes are
> the following:
> 1) Macros are used to access refcount and usagecount.
> 2) likely/unlikely were removed. I think introducing likely/unlikely
> should be a separate patch, since it touches portability. Also, I didn't see
> any performance effect from it.
> 3) LockBufHdr returns the state after taking the lock. Even without atomic
> increments, this can still save some loops by skipping a separate atomic read.
>

pinunpin-cas-original-fix.patch is just the original patch by Andres Freund
with a fix for the bug that caused a hang.
A performance comparison on a 72-core Intel server is attached. On this
machine we see no regression with the version of the patch from my previous letter.
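
For readers following along, here is a minimal sketch of the CAS-loop shape
this patch family is about (illustrative only: the state layout, constants
and names below are invented, not the ones from the patch):

#include <stdatomic.h>
#include <stdint.h>

#define BUF_REFCOUNT_ONE 1			/* assumed: refcount kept in the low bits */
#define BUF_HDR_LOCKED   (1U << 31)	/* assumed: header-lock flag bit */

typedef struct BufferDescSketch
{
	_Atomic uint32_t state;			/* packed refcount/usagecount/flags */
} BufferDescSketch;

/*
 * Pin a buffer: loop until the header is unlocked and our incremented
 * refcount has been installed with a single compare-and-swap.
 */
static void
pin_buffer_sketch(BufferDescSketch *buf)
{
	uint32_t	old = atomic_load(&buf->state);

	for (;;)
	{
		if (old & BUF_HDR_LOCKED)
		{
			/* header lock held elsewhere; reread the state and retry */
			old = atomic_load(&buf->state);
			continue;
		}
		if (atomic_compare_exchange_weak(&buf->state, &old,
										 old + BUF_REFCOUNT_ONE))
			break;				/* CAS succeeded, pin taken */
		/* CAS failed: 'old' was refreshed by the CAS itself; just loop */
	}
}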

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
n_clients,master,pinunpin-cas-original-fix,pinunpin-cas
   1,   18860,   19421,   19972
   8,  162503,  166901,  164363
  16,  316429,  321103,  319214
  36,  627215,  641028,  642854
  56,  745108,  774195,  786642
  90,  847150,  995036, 1022657
 100,  802922, 1077732, 1083769
 110,  615070, 1014446, 1034496
 120,  611956, 1074345, 1094610
 130,  571697, 1082626, 1100578
 140,  579909, 1075855, 1092749
 150,  540442, 1070737, 1089283
 160,  545942, 1068991, 1096360
 170,  515444, 1073508, 1089891
 180,  520867, 1076281, 1093987
 190,  488836, 1071283, 1097871
 200,  493795, 1082849, 1103040
 210,  463765,  967185,  988692
 220,  467661,  972180,  993521
 230,  437297,  976865,  980113
 250,  420711,  956151,  980965


pinunpin-cas-original-fix.patch
Description: Binary data



Re: [HACKERS] WIP: Fix parallel workers connection bug in pg_dump (Bug #13727)

2015-10-30 Thread Euler Taveira

On 30-10-2015 10:04, Zeus Kronion wrote:
> I'm still unclear on how to write regression tests for a connectivity
> bug. Are they necessary in this case?

There aren't regression tests for pg_dump. However, your instructions
are sufficient to demonstrate the bug.


You could continue the thread in -bugs because the discussion started 
there. Sometimes people track -bugs ML to make sure that some bugs 
aren't forgotten. Add your patch to the next CF [1].



[1] https://commitfest.postgresql.org/7/


--
   Euler Taveira   Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento




Re: [HACKERS] Move PinBuffer and UnpinBuffer to atomics

2015-10-30 Thread Andres Freund
Hi,

Thanks for benchmarking!

On 2015-10-30 16:28:22 +0300, Alexander Korotkov wrote:
> pinunpin-cas-original-fix.patch is just the original patch by Andres Freund
> with a fix for the bug that caused a hang.
> A performance comparison on a 72-core Intel server is attached. On this
> machine we see no regression with the version of the patch from my previous letter.

So pinunpin-cas-original-fix is my version with a bug fixed, and
pinunpin-cas is what exactly? Your earlier version with the xadd +
cmpxchg?

The results look pretty good. Could you give a few more details about
the hardware and workload (i.e. cpu model number + scale)?

So the plan would be to finish cleaning this up into a committable
shape?

Greetings,

Andres Freund




Re: [HACKERS] [ADMIN] Replication slots and isolation levels

2015-10-30 Thread Michael Paquier
On Fri, Oct 30, 2015 at 12:40 PM, Vladimir Borodin wrote:
> On Thu, Oct 29, 2015 at 3:29 PM, Oleksii Kliukin wrote:
>> Could it be a consequence of how REPEATABLE READ transactions handle
>> snapshots? With REPEATABLE READ the snapshot is acquired only once at the
>> beginning of a transaction; a READ COMMITTED transaction re-evaluates its
>> snapshot with each new command.
>
> I still don’t fully understand why this is so (the problem occurs while
> running only one SELECT statement in READ COMMITTED, so only one snapshot is
> taken), but if it is expected behavior shouldn’t the documentation mention that
> using READ COMMITTED (which is the default) you may still get conflicts with
> recovery while using replication slots?

Replication slots and hot_standby_feedback are two different, unrelated
concepts; slots are aimed at retaining WAL. I guess that's the
origin of your confusion:
http://www.postgresql.org/message-id/20150616192141.gd2...@alap3.anarazel.de
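
To make the distinction concrete, a sketch (the slot name is invented):

-- on the master: a physical slot only makes the master retain WAL
SELECT pg_create_physical_replication_slot('standby1_slot');

On the standby, recovery.conf would then carry primary_slot_name =
'standby1_slot', while it is hot_standby_feedback = on (in postgresql.conf)
that actually holds back vacuum on the master and so avoids most recovery
conflicts.
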
-- 
Michael




Re: [HACKERS] exposing pg_controldata and pg_config as functions

2015-10-30 Thread Erik Rijkers



[2015082503-pgconfig_controldata.diff]


I tried to build this; the patch applies cleanly, but then an ld error
emerges:


(The first four lines (about gram.y) are standard warnings; the error
starts from the ld line.)



In file included from gram.y:14908:0:
scan.c: In function ‘yy_try_NUL_trans’:
scan.c:10307:23: warning: unused variable ‘yyg’ [-Wunused-variable]
 struct yyguts_t * yyg = (struct yyguts_t*)yyscanner; /* This var may be unused depending upon options. */
                       ^
/usr/bin/ld: Dwarf Error: found dwarf version '4', this reader only handles version 2 information.

utils/fmgrtab.o:(.rodata+0x19f78): undefined reference to `_null_'
utils/fmgrtab.o:(.rodata+0x1a078): undefined reference to `_null_'
collect2: error: ld returned 1 exit status
make[2]: *** [postgres] Error 1
make[1]: *** [all-backend-recurse] Error 2
make: *** [all-src-recurse] Error 2



The configure was:

./configure \
  --prefix=/var/data1/pg_stuff/pg_installations/pgsql.controldata \
  --with-pgport=6594 \
  --bindir=/var/data1/pg_stuff/pg_installations/pgsql.controldata/bin.fast \
  --libdir=/var/data1/pg_stuff/pg_installations/pgsql.controldata/lib.fast \
  --sysconfdir=/var/data1/pg_stuff/pg_installations/pgsql.controldata/etc \
  --quiet --enable-depend --with-perl --with-python --with-openssl --with-libxml \
  --with-extra-version=_controldata_20151030_1432_c5057b2b3481 --enable-tap-tests

Thanks,

Erik Rijkers


Re: [HACKERS] Did the "Full-text search in PostgreSQL in milliseconds" patches land?

2015-10-30 Thread Oleg Bartunov
On Thu, Oct 29, 2015 at 5:31 PM, Colin 't Hart  wrote:

> Hi,
>
> I've been reading
>
> wiki.postgresql.org/images/2/25/Full-text_search_in_PostgreSQL_in_milliseconds-extended-version.pdf
> with interest and am wondering if these patches ever made it into the
> "official" version of PostgreSQL?
>

Not all of the patches have been committed. There are two more patches we
need to convince the community to accept. We have them rebased on top of head.



>
> I've tried doing some of the queries as described in the slides using
> 9.5b1 but I get the "No operator matches the given name and argument
> type(s)." error.
>
> Thanks,
>
> Colin
>


Re: [HACKERS] Dangling Client Backend Process

2015-10-30 Thread Tom Lane
Andres Freund  writes:
> adding a parseInput(conn) into the loop yields the expected
> FATAL:  57P01: terminating connection due to unexpected postmaster exit
> Is there really any reason not to do that?

Might work, but it probably needs some study:
(a) is it safe
(b) is this the right place / are there other places

regards, tom lane




Re: [HACKERS] Patch: Implement failover on libpq connect level.

2015-10-30 Thread Christopher Browne
On 30 October 2015 at 09:26, Robert Haas  wrote:
>
> On Thu, Oct 29, 2015 at 8:29 PM, Peter Eisentraut  wrote:
> > On 10/28/15 4:18 AM, Victor Wagner wrote:
> >> On Mon, 26 Oct 2015 16:25:57 -0400
> >> Peter Eisentraut  wrote:
> >>
> >>> Also, this assumes that all the components other than host and port
> >>> are the same.  Earlier there was a discussion about why the ports
> >>> would ever need to be different.  Well, why can't the database names
> >>> be different? I could have use for that.
> >>
> >> Because of the way PostgreSQL replication is implemented.
> >
> > There are multiple types of PostgreSQL replication, and there will be
> > others in the future.
>
> That's true, but doesn't allowing every parameter to be multiply
> specified greatly increase the implementation complexity for a pretty
> marginal benefit?  I think host and IP would hit 98% of the use cases
> here.

I think it makes the feature WORSE.  I am getting more and more convinced
that the Correct Solution is for this feature to be handled by submitting
multiple URIs, and my argument isn't even based on any aspects of
implementation complexity.

Take as example the case where I have two database servers I want to
be considered.

a) Database with URI
   postgresql://cbbro...@server-a.example.info:5432/my-first-database

b) Database with URL
   postgresql://rob...@server-b.example.info:7032/my-second-database

With all the "per-variable multiplicities", this would turn into a
combinatorial explosion: two values for each of user, host, port, and
database give 2^4, or 16 possible servers, some of which likely don't
exist, and others of which aren't proper (e.g. trying to connect to
database a) using robert's credentials).

Possibly some of those combinations are outright improper.

I'm not going to claim it's terribly wise to be doing the cross-credential
bit; I'd think it likely to be rather dumb for the application to use one
user when connecting to one database and another when connecting to
another.  But the notion of it being nondeterministic is just awful.

I head back to... "the way OpenLDAP does this"...
  They let you specify multiple LDAP servers...
ldap://server1.example.info:389 ldap://server2.example.info:389
where URIs are separated by whitespace.

Seems like Best Goodness for Postgres to do something analogous...
   postgresql://cbbro...@server-a.example.info:5432/my-first-database
postgresql://rob...@server-b.example.info:7032/my-second-database

Is it a bit long?  Sure.  But it is unambiguous, and requires *only*
whitespace parsing to separate the URIs.  I'd think it fine for Postgres to
support this via multiple "-d" options; that would be about as good.

That can cover 100% of cases, and I don't see it needing to interact
specially with odd bits such as "was that a replication slot?"  You can
always choose to shoot yourself in the foot, but this doesn't spin the
roulette for you...
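
To illustrate how little client-side machinery the whitespace-separated form
needs, here is a rough sketch (illustrative only, not proposed libpq code):

#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

/*
 * Try each whitespace-separated URI in turn; return the first connection
 * that comes up, or NULL if every server refuses us.
 */
static PGconn *
connect_first_available(const char *uri_list)
{
	char		buf[1024];
	char	   *uri;

	strncpy(buf, uri_list, sizeof(buf) - 1);
	buf[sizeof(buf) - 1] = '\0';

	for (uri = strtok(buf, " \t\n"); uri != NULL; uri = strtok(NULL, " \t\n"))
	{
		PGconn	   *conn = PQconnectdb(uri);

		if (PQstatus(conn) == CONNECTION_OK)
			return conn;		/* first working server wins */

		fprintf(stderr, "skipping %s: %s", uri, PQerrorMessage(conn));
		PQfinish(conn);
	}
	return NULL;
}

Each URI stays self-contained, so the credential/database pairings can never
get crossed.
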
--
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"


Re: [HACKERS] extend pgbench expressions with functions

2015-10-30 Thread Shulgin, Oleksandr
On Fri, Sep 18, 2015 at 10:21 AM, Fabien COELHO  wrote:

>
> Hello Kyotaro-san,
>
> My description should have been obscure. Indeed the call tree is
>> finite for *sane* expression nodes. But it makes infinite calls for
>> a value of expr->etype unknown by both evalDouble and
>> evalInt.
>>
>
> Such an issue would be detected if the function is actually tested; hopefully
> this should be the case... :-)
>
> However I agree that relying implicitly on the "default" case is not very
> good practice, so I updated the code in the attached v11 to fail
> explicitly on such errors.
>
> I also attached a small test script, which exercises most (all?) functions:
>
>   ./pgbench -f functions.sql -t 1
>

A short review from me:

1. Patch applies cleanly on current HEAD.
2. It compiles without errors or warnings.
3. The attached test case can be executed w/o symptoms of any problem and
it produces meaningful results.

Should we not allow for functions taking 0 arguments?  Since we're already
into some math here, how about pi()? ;-)

I understand requiring at least 1 arg simplifies the code a bit, but right
now it reports syntax error for "random()", while it correctly reports
unexpected number of arguments for "random(1,2,3)".  We would need another
check for min() and max() which expect >=1 arguments, but it's easy to add.

I would also argue that we should rename "random" to "rand" here to avoid
confusion with the familiar SQL function "random()" that doesn't take
arguments.
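
For example, with the patch applied (and the rename), a test script could
read (a sketch, borrowing the documentation's own examples):

\set ntellers 10 * :scale
\set tid rand(1, :ntellers)
\set delta rand(-5000, 5000)
UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;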

--
Alex


Re: [HACKERS] Dangling Client Backend Process

2015-10-30 Thread Andres Freund
On 2015-10-30 09:48:33 -0400, Tom Lane wrote:
> Robert Haas  writes:
> > Hmm.  ProcessInterrupts() signals some FATAL errors while the
> > connection is idle, and rumor has it that that works: the client
> > doesn't immediately read the FATAL error, but the next time it sends a
> > query, it tries to read from the connection and sees the FATAL error
> > at that time.  I wonder why that's not working here.
>
> A likely theory is that the kernel is reporting failure to libpq's
> send() because the other side of the connection is already gone.
> This would be timing-dependent of course.

Looking at strace, psql over a unix socket is actually receiving the
error message:
recvfrom(3, "E\0\0\0lSFATAL\0C57P01\0Mterminating "..., 16384, 0, NULL, NULL) = 109
but psql does print:
server closed the connection unexpectedly

it happens to work over localhost:
FATAL:  57P01: terminating connection due to unexpected postmaster exit
LOCATION:  secure_read, be-secure.c:170
server closed the connection unexpectedly
   This probably means the server terminated abnormally
   before or while processing the request.

the problem seems to be the loop eating all the remaining input:
void
pqHandleSendFailure(PGconn *conn)
{
/*
 * Accept any available input data, ignoring errors.  Note that if
 * pqReadData decides the backend has closed the channel, it will close
 * our side of the socket --- that's just what we want here.
 */
while (pqReadData(conn) > 0)
 /* loop until no more data readable */ ;

after the first pqReadData() there's no remaining input and thus the
second call to pqReadData()'s pqsecure_read reads 0 and this is hit:
/*
 * OK, we are getting a zero read even though select() says ready. This
 * means the connection has been closed.  Cope.
 */
definitelyEOF:
	printfPQExpBuffer(&conn->errorMessage,
					  libpq_gettext("server closed the connection unexpectedly\n"
									"\tThis probably means the server terminated abnormally\n"
									"\tbefore or while processing the request.\n"));

adding a parseInput(conn) into the loop yields the expected
FATAL:  57P01: terminating connection due to unexpected postmaster exit

Is there really any reason not to do that?
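
That is, roughly (a sketch of the change, not a polished patch):

	/*
	 * Accept and *parse* any available input data, so that a FATAL
	 * message already sitting in the buffer becomes a proper error
	 * result instead of being discarded.
	 */
	while (pqReadData(conn) > 0)
		parseInput(conn);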

Greetings,

Andres Freund




Re: [HACKERS] Dangling Client Backend Process

2015-10-30 Thread Andres Freund
On 2015-10-30 10:57:45 -0400, Tom Lane wrote:
> Andres Freund  writes:
> > adding a parseInput(conn) into the loop yields the expected
> > FATAL:  57P01: terminating connection due to unexpected postmaster exit
> > Is there really any reason not to do that?
> 
> Might work, but it probably needs some study:

Yea, definitely. I was just at pgconf.eu's keynote catching up on a
talk. No fully-thought-through proposals to be expected ;)

> (a) is it safe

I don't immediately see why not.

> (b) is this the right place / are there other places

I think it's OK for the send-failure case; in a quick look-through I
didn't find anything else for writes. I'm not entirely sure all read
cases are handled, though, but they seem less likely to be mishandled.

Greetings,

Andres Freund




Re: [HACKERS] ExclusiveLock on PostgreSQL - Fabio Mendonça

2015-10-30 Thread Fabio Oliveira De Mendonca

Thanks, Robert.

I read the linked material and it helped me reach a new decision.

Thank you.
att.

Fabio Mendonça




From: Robert Haas
Sent: Friday, October 30, 2015 07:49
To: Fabio Oliveira De Mendonca
Cc: k...@it.is.rice.edu; gsst...@mit.edu; pgsql-hackers@postgresql.org; fabio.mendonca@gmail.com
Subject: Re: [HACKERS] ExclusiveLock on PostgreSQL - Fabio Mendonça

On Wed, Oct 28, 2015 at 5:59 PM, Fabio Oliveira De Mendonca wrote:
> I have a process that inserts 600,000 rows into table "A" (130 columns),
> and I received the "ExclusiveLock" error message, losing some rows during
> the transaction.  The insert transaction occurs every 2 min., and every
> 1 min. a second process reads table "A" (joined with table "C" using the
> PK) to insert into a table ("B").  Well, I thought about creating
> partitions on table "A", but I don't believe that would correct the
> problem ("ExclusiveLock").

This isn't really the right mailing list for this question.

You might find 
https://listas.postgresql.org.br/cgi-bin/mailman/listinfo/pgbr-geral
helpful, or you can ask at
http://www.postgresql.org/list/pgsql-general/

You should also read
https://wiki.postgresql.org/wiki/Guide_to_reporting_problems --
because this report does not contain enough information for someone to
answer your question.  In particular, including the exact text of any
commands you executed and any error or other messages the system
generated would be helpful.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] WIP: Fix parallel workers connection bug in pg_dump (Bug #13727)

2015-10-30 Thread Michael Paquier
On Fri, Oct 30, 2015 at 2:42 PM, Euler Taveira  wrote:
> On 30-10-2015 10:04, Zeus Kronion wrote:
>>
>> I'm still unclear on how to write regression tests for a connectivity
>> bug. Are they necessary in this case?
>>
> There aren't regression tests for pg_dump. However, your instructions are
> sufficient to demonstrate the bug.

Well, we could have something in pg_dump/t/, though the instance set
by standard_initdb would require some update in pg_hba.conf to switch
to md5 before running the dump.

> You could continue the thread in -bugs because the discussion started there.
> Sometimes people track -bugs ML to make sure that some bugs aren't
> forgotten. Add your patch to the next CF [1].

Yep. Things get easily lost.
-- 
Michael




[HACKERS] Request: pg_cancel_backend variant that handles 'idle in transaction' sessions

2015-10-30 Thread Merlin Moncure
Idle hanging transactions from poorly written applications are the
bane of my existence.  Several months back one of them took down one
of our production websites for several hours.

Unfortunately, the only way to deal with them is to terminate the
backend, which is heavy-handed and in some cases causes further damage.
Something like pg_cancel_transaction(pid) would be nice; it would
end the transaction regardless of whether it is in an actual statement
or not.  Similarly, transaction_timeout would be a lot more effective
than statement_timeout.  It's nice to think about a world where
applications don't do such things, but in this endless sea of
enterprise java soup I live in, it's, uh, not realistic.  This would be
a lot cleaner than the cron-driven sweep I'm forced to implement now,
and could be made part of the standard configuration across the
enterprise.
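
For the record, the sweep boils down to something like this (a sketch; the
ten-minute threshold is arbitrary):

-- run from cron: terminate sessions idle in transaction for too long
SELECT pg_terminate_backend(pid)
  FROM pg_stat_activity
 WHERE state = 'idle in transaction'
   AND state_change < now() - interval '10 minutes';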

merlin




Re: [HACKERS] [DESIGN] ParallelAppend

2015-10-30 Thread Kouhei Kaigai
> On Wed, Oct 28, 2015 at 3:55 PM, Kouhei Kaigai  wrote:
> > At PGconf.EU, I had a talk with Robert about this topic, and
> > it became clear we have the same idea.
> >
> >> ++
> >> |sub-plan |   * Sub-Plan 1 ... Index Scan on p1
> >> |index on *-> * Sub-Plan 2 ... PartialSeqScan on p2
> >> |shared   |   * Sub-Plan 2 ... PartialSeqScan on p2
> >> |memory   |   * Sub-Plan 2 ... PartialSeqScan on p2
> >> +-+   * Sub-Plan 3 ... Index Scan on p3
> >>
> > In the above example, I made the non-parallel sub-plans use only
> > 1 slot of the array, even though a PartialSeqScan takes 3 slots.
> > It is a strict rule; a non-parallel-aware sub-plan can be picked
> > up only once.
> > The index of the sub-plan array is initialized to 0, then increased
> > up to 5 as each worker processes the parallel-aware Append.
> > So, once a worker takes a non-parallel sub-plan, no other worker can
> > take the same slot again; thus, no duplicated rows will be
> > produced by a non-parallel sub-plan in the parallel-aware Append.
> > Also, this array structure will prevent too large a number of
> > workers from picking up a particular parallel-aware sub-plan, because
> > a PartialSeqScan occupies 3 slots; that means at most three workers
> > can pick up this sub-plan. If the 1st worker took the IndexScan on
> > p1, and the 2nd-4th workers took the PartialSeqScan on p2, then the
> > 5th worker (if any) will pick up the IndexScan on p3 even if the
> > PartialSeqScan on p2 was not completed.
> 
> Actually, this is not exactly what I had in mind.  I was thinking that
> we'd have a single array whose length is equal to the number of Append
> subplans, and each element of the array would be a count of the number
> of workers executing that subplan.  So there wouldn't be multiple
> entries for the same subplan, as you propose here.  To distinguish
> between parallel-aware and non-parallel-aware plans, I plan to put a
> Boolean flag in the plan itself.
>
I don't have a strong preference here. Either design can implement the
requirement: non-parallel sub-plans are never picked up twice, and
parallel-aware sub-plans can be picked up multiple times.
So, I'll start with your suggestion above.
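
For my own notes, a minimal sketch of that choose-next-subplan logic, with
the shared array of per-subplan worker counts and the per-subplan flag
(names invented; the caller is assumed to hold the protecting lock):

/*
 * Pick the subplan with the fewest attached workers; a non-parallel-aware
 * subplan may be chosen at most once.  Returns -1 when nothing is left.
 */
static int
choose_next_subplan_sketch(int nplans, int *nworkers,
						   const bool *parallel_aware)
{
	int			best = -1;
	int			i;

	for (i = 0; i < nplans; i++)
	{
		if (!parallel_aware[i] && nworkers[i] > 0)
			continue;			/* already claimed by some worker */
		if (best < 0 || nworkers[i] < nworkers[best])
			best = i;
	}
	if (best >= 0)
		nworkers[best]++;
	return best;
}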

Thanks,
--
NEC Business Creation Division / PG-Strom Project
KaiGai Kohei 




Re: [HACKERS] Patch: Implement failover on libpq connect level.

2015-10-30 Thread David Fetter
On Fri, Oct 30, 2015 at 11:29:09AM -0400, Christopher Browne wrote:
> On 30 October 2015 at 09:26, Robert Haas  wrote:
> >
> > On Thu, Oct 29, 2015 at 8:29 PM, Peter Eisentraut  wrote:
> > > On 10/28/15 4:18 AM, Victor Wagner wrote:
> > >> On Mon, 26 Oct 2015 16:25:57 -0400
> > >> Peter Eisentraut  wrote:
> > >>
> > >>> Also, this assumes that all the components other than host and port
> > >>> are the same.  Earlier there was a discussion about why the ports
> > >>> would ever need to be different.  Well, why can't the database names
> > >>> be different? I could have use for that.
> > >>
> > >> Because of the way PostgreSQL replication is implemented.
> > >
> > > There are multiple types of PostgreSQL replication, and there will be
> > > others in the future.
> >
> > That's true, but doesn't allowing every parameter to be multiply
> > specified greatly increase the implementation complexity for a pretty
> > marginal benefit?  I think host and IP would hit 98% of the use cases
> > here.
> 
> I think it makes the feature WORSE.  I am getting more and more convinced
> that the Correct Solution is for this feature to be handled by submitting
> multiple URIs, and my argument isn't even based on any aspects of
> implementation complexity.
> 
> Take as example the case where I have two database servers I want to
> be considered.
> 
> a) Database with URI
>postgresql://cbbro...@server-a.example.info:5432/my-first-database
> 
> b) Database with URL
>postgresql://rob...@server-b.example.info:7032/my-second-database
> 
> With all the "per-variable multiplicities", this would turn into a
> combinatorial explosion of combinations, some 2^4, or 16 possible servers,
> some of which likely don't exist, and others of which aren't proper (e.g. -
> trying to connect to database a) using robert's credentials).
> 
> Possibly some of those combinations are outright improper.

Let's say that the chances of all of them existing and being proper
are close enough to zero not to matter.

> Seems like Best Goodness for Postgres to do something analogous...
>postgresql://cbbro...@server-a.example.info:5432/my-first-database
> postgresql://rob...@server-b.example.info:7032/my-second-database

Yes.

> Is it a bit long?  Sure.  But it is unambiguous, and requires *only*
> whitespace parsing to separate the URIs.  I'd think it fine for Postgres to
> support this via multiple "-d" options; that would be about as good.

Indeed.  Is there any risk of losing ordering information in the
standard command line processing libraries?

> That can cover 100% of cases, and I don't see it needing to interact
> specially with odd bits such as "was that a replication slot?"  You can
> always choose to shoot yourself in the foot, but this doesn't spin the
> roulette for you...

An evocative image, if not a pretty one.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] [PATCH v3] GSSAPI encryption support

2015-10-30 Thread Robbie Harwood
Andres, can you please weigh in here, since your voice is important to
this process?

Robbie Harwood  writes:

> Andres Freund  writes:
>
>> On 2015-10-22 16:47:09 +0900, Michael Paquier wrote:
>>> Hm, and that's why you chose this way of going. My main concern about
>>> this patch is that it adds on top of the existing Postgres protocol a
>>> layer to encrypt and decrypt the messages between server and client
>>> based on GSSAPI. All messages transmitted between client and server
>>> are changed to 'g' messages on the fly and switched back to their
>>> original state at reception. This is symbolized by the four routines
>>> you added in the patch in this purpose, two for frontend and two for
>>> backend, each one for encryption and decryption. I may be wrong of
>>> course, but it seems to me that this approach will not survive
>>> committer-level screening because of the fact that context-level
>>> things invade higher level protocol messages.
>>
>> Agreed. At least one committer here indeed thinks this approach is not
>> acceptable (and I've said so upthread).
>
> Okay, I'll make some changes.  Before I do, though, since this is not
> the approach I came up with, can you explicitly state what you're
> looking for here?  It subjectively seems that I'm getting a lot of
> feedback of "this feels wrong" without suggestion for improvement.
>
> To be clear, what I need to know is:
>
> - What changes do you want to see in the wire protocol?  (And how will
>   fallback be supported if that's affected?)
>
> - Since this seems to be an important sticking point, what files am I
>   encouraged to change (or prohibited from changing)?  (Fallback makes
>   this complex.)
>
> - I've been assuming that we care about fallback, but I'd like to be
>   told that it's something postgres actually wants to see because it's
>   the most intricate part of these changes.  (I'm reasonably confident
>   that the code becomes simpler without it, and I myself have no use for
>   it.)
>
> If I understand what you're asking for (and the above is intended to be
> sure that I will), this will not be a trivial rework, so I want to be
> really sure before doing that because writing this code a third time is
> something I don't relish.
>
> Thanks,
> --Robbie


signature.asc
Description: PGP signature


Re: [HACKERS] extend pgbench expressions with functions

2015-10-30 Thread Fabien COELHO


Here is a v12 which implements the suggestions below.

> Should we not allow for functions taking 0 arguments?  Since we're already
> into some math here, how about pi()? ;-)

Hmmm, why not.

> I understand requiring at least 1 arg simplifies the code a bit, but right
> now it reports syntax error for "random()", while it correctly reports
> unexpected number of arguments for "random(1,2,3)".  We would need another
> check for min() and max() which expect >=1 arguments, but it's easy to add.

Indeed, I had to add a special check.

> I would also argue that we should rename "random" to "rand" here to avoid
> confusion with the familiar SQL function "random()" that doesn't take
> arguments.

Why not, as it is also consistent with exporand() & gaussrand().

--
Fabien.

diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index 0ac40f1..3278e77 100644
--- a/doc/src/sgml/ref/pgbench.sgml
+++ b/doc/src/sgml/ref/pgbench.sgml
@@ -771,17 +771,20 @@ pgbench  options  dbname
   Sets variable varname to an integer value calculated
   from expression.
   The expression may contain integer constants such as 5432,
-  references to variables :variablename,
+  double constants such as 3.14156,
+  references to integer variables :variablename,
   and expressions composed of unary (-) or binary operators
   (+, -, *, /, %)
-  with their usual associativity, and parentheses.
+  with their usual associativity, function calls and parentheses.
+   shows the available
+  functions.
  
 
  
   Examples:
 
 \set ntellers 10 * :scale
-\set aid (1021 * :aid) % (10 * :scale) + 1
+\set aid (1021 * rand(1, 10 * :scale)) % (10 * :scale) + 1
 
 

@@ -931,18 +934,117 @@ pgbench  options  dbname

   
 
+   
+   
+PgBench Functions
+
+ 
+  
+   Function
+   Return Type
+   Description
+   Example
+   Result
+  
+ 
+ 
+  
+   abs(a)
+   same as a
+   integer or double absolute value
+   abs(-17)
+   17
+  
+  
+   ddebug(x)
+   double
+   stderr print for debug and return argument
+   ddebug(5432.1)
+   5432.1
+  
+  
+   double(i)
+   double
+   evaluate as int and cast to double
+   double(5432)
+   5432.0
+  
+  
+   exporand(i, j, t)
+   integer
+   exponentially distributed random integer in the bounds, see below
+   exporand(1, 10, 3.0)
+   int between 1 and 10
+  
+  
+   idebug(i)
+   integer
+   stderr print for debug and return argument
+   idebug(5432)
+   5432
+  
+  
+   int(x)
+   integer
+   evaluate as double and cast to int
+   int(5.4 + 3.8)
+   9
+  
+  
+   gaussrand(i, j, t)
+   integer
+   gaussian distributed random integer in the bounds, see below
+   gaussrand(1, 10, 2.5)
+   int between 1 and 10
+  
+  
+   min(i, ...)
+   integer
+   minimum value
+   min(5, 4, 3, 2)
+   2
+  
+  
+   max(i, ...)
+   integer
+   maximum value
+   max(5, 4, 3, 2)
+   5
+  
+  
+   pi()
+   double
+   value of the PI constant
+   pi()
+   3.14159265358979323846
+  
+  
+   rand(i, j)
+   integer
+   uniformly distributed random integer in the bounds
+   rand(1, 10)
+   int between 1 and 10
+  
+  
+   sqrt(x)
+   double
+   square root
+   sqrt(2.0)
+   1.414213562
+  
+ 
+ 
+   
+
   
As an example, the full definition of the built-in TPC-B-like
transaction is:
 
 
-\set nbranches :scale
-\set ntellers 10 * :scale
-\set naccounts 10 * :scale
-\setrandom aid 1 :naccounts
-\setrandom bid 1 :nbranches
-\setrandom tid 1 :ntellers
-\setrandom delta -5000 5000
+\set aid rand(1, 10 * :scale)
+\set bid rand(1, 1 * :scale)
+\set tid rand(1, 10 * :scale)
+\set delta rand(-5000, 5000)
 BEGIN;
 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
diff --git a/src/bin/pgbench/exprparse.y b/src/bin/pgbench/exprparse.y
index e68631e..35f6c58 100644
--- a/src/bin/pgbench/exprparse.y
+++ b/src/bin/pgbench/exprparse.y
@@ -16,10 +16,14 @@
 
 PgBenchExpr *expr_parse_result;
 
+static PgBenchExprList *make_elist(PgBenchExpr *exp, PgBenchExprList *list);
 static PgBenchExpr *make_integer_constant(int64 ival);
+static PgBenchExpr *make_double_constant(double dval);
 static PgBenchExpr *make_variable(char *varname);
 static PgBenchExpr *make_op(char operator, PgBenchExpr *lexpr,
 		PgBenchExpr *rexpr);
+static int find_func(const char * fname);
+static PgBenchExpr *make_func(const int fnumber, PgBenchExprList *args);
 
 %}
 
@@ -29,15 +33,19 @@ static PgBenchExpr *make_op(char operator, PgBenchExpr *lexpr,
 %union
 {
 	int64		ival;
+	double		dval;
 	char	   *str;
 	PgBenchExpr *expr;
+	

Re: [HACKERS] Patch: Implement failover on libpq connect level.

2015-10-30 Thread Josh Berkus
On 10/30/2015 08:29 AM, Christopher Browne wrote:
> I think it makes the feature WORSE.  I am getting more and more convinced
> that the Correct Solution is for this feature to be handled by submitting
> multiple URIs, and my argument isn't even based on any aspects of
> implementation complexity.

+1

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re : Re: Re : Re: [HACKERS] UTF-32 support in PostgreSQL ?

2015-10-30 Thread fortin . christian
Now I have received authorization to give you an answer to the WHY question!
Basically, this project is classified TOP SECRET.

Well, we know that we have no real advantage in using UTF-32 compared to UTF-8.
But we need to establish a gateway between two huge networks.

One network is the Internet; the other is ... call it the extra-Internet.
The extra-Internet is older than the Internet that you already know.
It doesn't use the IP protocol, but uses a 32-bit-per-character encoding.
This 32-bit encoding is not UTF-32, but it supports 40 languages, languages
which are not included in UTF-32.
The language with the fewest characters uses 100 characters.
The largest alphabet has 1 characters.
The most used language has 500 characters.

This extra-Internet is as big as the actual Internet that you know.
This extra-Internet was not built by the USA, but by another country.
Well, I am trying to convince people to use UTF-32.
I will need to ask Unicode to integrate the foreign 32-bit encoding in a
future release of UTF-32, and ask the extra-Internet authority to integrate
UTF-32 into their standard 32-bit encoding.
I requested that the IETF support UTF-32 in IPv6, and I asked w3.org to
support UTF-32 in a future HTML format.
I plan to propose that the extra-Internet authority upgrade to IPv6.
They actually have problems with the availability of addresses, like we have
with IPv4. The protocol they use is very basic, more basic than IPv4, and it
fails very often.

Well, I hope this answers your question.

On 26/10/15, Craig Ringer wrote:
> On 27 October 2015 at 05:39,  wrote:
> 
> > I mean for ALL, data stored, source code, and translation files.
> > For source code, I think then GCC must support UTF-32 before.
> 
> Why?
> 
> UTF-32 is an incredibly inefficient way to store text that's
> predominantly or entirely within the 7-bit ASCII space. UTF-8 is a
> much better way to handle it.
> 
> Anyway, while gcc supports sources encoded in utf-8 just fine, it's
> more typical to represent chars using byte escapes so that people with
> misconfigured text editors don't mangle them. It does not support
> utf-8 identifiers (variable names, function names, etc) containing
> characters outside the 7-bit ASCII space, but you can work around it
> with UCN if you need to; see the FAQ:
> 
> https://gcc.gnu.org/wiki/FAQ#What_is_the_status_of_adding_the_UTF-8_support_for_identifier_names_in_GCC.3F
> 
> I don't think the PostgreSQL project is likely to accept patches using
> characters outside the 7-bit ascii space in the near future, as
> compiler and text editor support is unfortunately still too primitive.
> We support a variety of legacy platforms and toolchains, many of which
> won't cope at all. There isn't a pressing reason, since at the user
> level the support for a wide variety of charsets (including all
> characters in the UTF-32 space) is already present.
> 
> I am aware this is a form of English-language privilege. Of course
> it's easy for me as an English first-language speaker to say "oh, we
> don't need support for your language in the code". It's also practical
> though - code in a variety of languages, so that no one person can
> read or understand all of it, is not maintainable in the long term.
> Especially when people join and leave the project. It's the same
> reason the project is picky about introducing new programming
> languages, even though it might be nice to be able to write parts of
> the system in Python, parts in Haskell, etc.
> 
> So I don't think we need UTF-32 source code support, or even full
> UTF-8 source code support, because even if we had it we probably
> wouldn't use it.
> 
> 
> > I sent an e-mail to Oracle to see what they tink about this huge idea.
> 
> I don't understand how this is a huge idea. The representation of the
> characters doesn't matter, so long as the DB can represent the full
> character suite. Right?
> 
> > Well, I know it's not efficient space wise, but this in the only way that we
> > can deployed worldwide.
> 
> UTF-8 is widely used worldwide and covers the full Unicode 32-bit code space.
> 
> I wonder if you are misunderstanding UTF-8 vs UCS-2 vs UTF-16 vs UTF-32.
> 
> UTF-8 is an encoding that can represent the full 32-bit Unicode space
> using escape sequences. It is endianness-independent. One character is
> a variable number of bytes, so lookups to find the n'th character,
> substring operations, etc are a bit ugly. UTF-8 is the character set
> used by most UNIX APIs.
> 
> UCS-2 is a legacy encoding that can represent the lower 16 bits of the
> Unicode space. It cannot represent the full 32-bit Unicode space. It
> has two different forms, little-endian and big-endian, so you have to
> include a marker to say which is which, or be careful about handling
> it in your code. It's easy to do n'th character lookups, substrings,
> etc.
> 
> UTF-16 is like UCS-2, but adds UTF-8-like escape sequences to handle
> 

Re: [HACKERS] ALTER ... OWNER TO ... vs. ALTER DEFAULT PRIVILEGES

2015-10-30 Thread Robert Haas
On Thu, Oct 29, 2015 at 10:31 PM, David Fetter  wrote:
> Had this been part of the original ALTER DEFAULT PRIVILEGES patch,
> those privileges would simply have been applied.  Since it wasn't, I'm
> ass-u-me'ing that changing the default behavior to that is going to
> cause (possibly legitimate) anxiety.

The word "applied" is not very clear here.  You want to revoke all
existing privileges and then regrant whatever the default privileges
would have been given the new owner?  That might be a reasonable thing
to have a command for, but doing it automatically on an owner change
does not sound like a good idea.  That could be very surprising
behavior.
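
To spell out the scenario (a sketch; the roles, schema and table are
invented):

-- as alice: future tables she creates in app are readable by reporting
ALTER DEFAULT PRIVILEGES FOR ROLE alice IN SCHEMA app
    GRANT SELECT ON TABLES TO reporting;

-- as bob: create a table, then hand it over
CREATE TABLE app.t (id int);
ALTER TABLE app.t OWNER TO alice;
-- today the owner change leaves privileges untouched; the question is
-- whether, or via what explicit command, alice's defaults should now apply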

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] Patch to install config/missing

2015-10-30 Thread Jim Nasby
Currently, config/missing isn't being installed. This can lead to
confusing error messages, for example when Perl isn't found and something
needs it [1]. The attached patch adds it to the install and uninstall recipes.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
diff --git a/config/Makefile b/config/Makefile
index da12838..67e7998 100644
--- a/config/Makefile
+++ b/config/Makefile
@@ -7,9 +7,11 @@ include $(top_builddir)/src/Makefile.global
 
 install: all installdirs
 	$(INSTALL_SCRIPT) $(srcdir)/install-sh '$(DESTDIR)$(pgxsdir)/config/install-sh'
+	$(INSTALL_SCRIPT) $(srcdir)/missing '$(DESTDIR)$(pgxsdir)/config/missing'
 
 installdirs:
 	$(MKDIR_P) '$(DESTDIR)$(pgxsdir)/config'
 
 uninstall:
 	rm -f '$(DESTDIR)$(pgxsdir)/config/install-sh'
+	rm -f '$(DESTDIR)$(pgxsdir)/config/missing'



Re: [HACKERS] [DOCS] max_worker_processes on the standby

2015-10-30 Thread Robert Haas
On Thu, Oct 29, 2015 at 5:41 PM, Fujii Masao  wrote:
> I found another strange behavior on track_commit_timestamp.
> Here are the steps to reproduce it.
>
> 1. Start the master and standby servers with track_commit_timestamp enabled.
> Since committs is activated in standby, pg_last_committed_xact() can
> successfully return the timestamp of last transaction as expected.
>
> 2. Disable track_commit_timestamp in the master and restart the master server.
> The parameter-change WAL record is streamed to the standby and committs
> is deactivated. pg_last_committed_xact() causes an ERROR in the standby.
>
> 3. Run checkpoint in the master.
>
> 4. Run restartpoint in the standby after the checkpoint WAL record generated
> in #3 is replicated to the standby.
>
> 5. Restart the standby server.
> Committs is activated in the standby because track_commit_timestamp is
> enabled.

This seems wrong already at this point.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [ADMIN] Replication slots and isolation levels

2015-10-30 Thread Robert Haas
On Thu, Oct 29, 2015 at 3:29 PM, Oleksii Kliukin  wrote:
> Could it be a consequence of how REPEATABLE READ transactions handle
> snapshots? With REPEATABLE READ the snapshot is acquired only once at the
> beginning of a transaction; a READ COMMITTED transaction re-evaluates its
> snapshot with each new command.

I bet that's exactly it.
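
A quick way to see the difference (sketch):

-- session 1
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM t;   -- snapshot taken here, kept for the whole xact

-- session 2
INSERT INTO t DEFAULT VALUES;

-- session 1
SELECT count(*) FROM t;   -- same count: still the original snapshot
COMMIT;

Under READ COMMITTED the second SELECT would take a fresh snapshot and see
the new row.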

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

