Re: [HACKERS] IF (NOT) EXISTS in psql-completion

2016-02-25 Thread Kyotaro HORIGUCHI
Hello, this is the new version of this patch.

At Fri, 26 Feb 2016 13:17:26 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI 
 wrote in 
<20160226.131726.54224488.horiguchi.kyot...@lab.ntt.co.jp>
> Hello, thank you for the comments.
> 
> At Mon, 15 Feb 2016 15:43:57 +0300, Artur Zakirov  
> wrote in <56c1c80d.7020...@postgrespro.ru>
> > I did some tests with your patch. But I am not confident in
> > tab-complete.c.
> > 
> > And I have some notes:
> > 
> > 1 - I execute git apply command and get the following warning:
> > 
> > ../0001-Suggest-IF-NOT-EXISTS-for-tab-completion-of-psql.patch:302:
> > trailing whitespace.
> > /*
> > warning: 1 line adds whitespace errors.
> > 
> > This is because of superfluous whitespace I think.
> 
> Oops. I'll remove it.
> 
> > 2 - In psql I write "create table if" and press <TAB>. psql adds the
> > following:
> > 
> > create table IF NOT EXISTS
> > 
> > I think psql should continue with lower case if the user wrote the
> > query in lower case:
> 
> Good catch! Hmm. COMPLETE_WITH_SCHEMA_QUERY() does it. For
> example, the following existing completion behaves in the same
> way.
> 
> "drop index c" ==> "drop index CONCURRENTLY"
> 
> The results of schema queries should be treated in a case-sensitive
> way, so the additional keywords added by 'UNION' are also treated
> case-sensitively.
> 
> COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_indexes,
>" UNION SELECT 'CONCURRENTLY'");
> 
> Fixing this is another problem. So I'd like to leave this alone
> here.
> 
> > create table if not exists
> > 
> > 3 - Same with "IF EXISTS". If I write "alter view if" and press <TAB>,
> > psql writes:
> > 
> > alter view IF EXISTS

Finally, the only thing done in this new patch is removing one
dangling space.

regards,


-- 
Kyotaro Horiguchi
NTT Open Source Software Center
>From f9610a87d73629ac3fcc25fcf8e667c5ca3e921d Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi 
Date: Fri, 5 Feb 2016 16:50:35 +0900
Subject: [PATCH] Suggest IF (NOT) EXISTS for tab-completion of psql

This patch lets psql suggest "IF (NOT) EXISTS". In addition, since
this patch introduces some mechanism for syntactical robustness,
it allows psql completion to omit some optional parts when matching.
---
 src/bin/psql/tab-complete.c | 559 
 1 file changed, 463 insertions(+), 96 deletions(-)

diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 5f27120..d3eab8c 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -656,6 +656,10 @@ static const SchemaQuery Query_for_list_of_matviews = {
 "   FROM pg_catalog.pg_roles "\
 "  WHERE substring(pg_catalog.quote_ident(rolname),1,%d)='%s'"
 
+#define Query_for_list_of_rules \
+"SELECT pg_catalog.quote_ident(rulename) FROM pg_catalog.pg_rules "\
+" WHERE substring(pg_catalog.quote_ident(rulename),1,%d)='%s'"
+
 #define Query_for_list_of_grant_roles \
 " SELECT pg_catalog.quote_ident(rolname) "\
 "   FROM pg_catalog.pg_roles "\
@@ -763,6 +767,11 @@ static const SchemaQuery Query_for_list_of_matviews = {
 "SELECT pg_catalog.quote_ident(tmplname) FROM pg_catalog.pg_ts_template "\
 " WHERE substring(pg_catalog.quote_ident(tmplname),1,%d)='%s'"
 
+#define Query_for_list_of_triggers \
+"SELECT pg_catalog.quote_ident(tgname) FROM pg_catalog.pg_trigger "\
+" WHERE substring(pg_catalog.quote_ident(tgname),1,%d)='%s' AND "\
+"   NOT tgisinternal"
+
 #define Query_for_list_of_fdws \
 " SELECT pg_catalog.quote_ident(fdwname) "\
 "   FROM pg_catalog.pg_foreign_data_wrapper "\
@@ -906,7 +915,7 @@ static const pgsql_thing_t words_after_create[] = {
 	{"PARSER", Query_for_list_of_ts_parsers, NULL, THING_NO_SHOW},
 	{"POLICY", NULL, NULL},
 	{"ROLE", Query_for_list_of_roles},
-	{"RULE", "SELECT pg_catalog.quote_ident(rulename) FROM pg_catalog.pg_rules WHERE substring(pg_catalog.quote_ident(rulename),1,%d)='%s'"},
+	{"RULE", Query_for_list_of_rules},
 	{"SCHEMA", Query_for_list_of_schemas},
 	{"SEQUENCE", NULL, _for_list_of_sequences},
 	{"SERVER", Query_for_list_of_servers},
@@ -915,7 +924,7 @@ static const pgsql_thing_t words_after_create[] = {
 	{"TEMP", NULL, NULL, THING_NO_DROP},		/* for CREATE TEMP TABLE ... */
 	{"TEMPLATE", Query_for_list_of_ts_templates, NULL, THING_NO_SHOW},
 	{"TEXT SEARCH", NULL, NULL},
-	{"TRIGGER", "SELECT pg_catalog.quote_ident(tgname) FROM pg_catalog.pg_trigger WHERE substring(pg_catalog.quote_ident(tgname),1,%d)='%s' AND NOT tgisinternal"},
+	{"TRIGGER", Query_for_list_of_triggers},
 	{"TYPE", NULL, _for_list_of_datatypes},
 	{"UNIQUE", NULL, NULL, THING_NO_DROP},		/* for CREATE UNIQUE INDEX ... */
 	{"UNLOGGED", NULL, NULL, THING_NO_DROP},	/* for CREATE UNLOGGED TABLE
@@ -944,6 +953,7 @@ static char **complete_from_variables(const char *text,
 	const char *prefix, const char *suffix, bool need_value);
 static char *complete_from_files(const char *text, int state);
 

Re: [HACKERS] proposal: PL/Pythonu - function ereport

2016-02-25 Thread Pavel Stehule
2016-02-25 7:06 GMT+01:00 Catalin Iacob :

> On Fri, Feb 19, 2016 at 9:41 PM, Pavel Stehule 
> wrote:
> > It looks like a good idea. The last version doesn't break
> > compatibility - and I think it can work.
> >
> > I wrote the code; it works on Python 2 and Python 3
>
> Hi,
>
> I've attached a patch on top of yours with some documentation
> improvements, feel free to fold it in your next version.
>

merged


>
> I think the concept is good. We're down to code level changes. Most of
> them are cosmetical, misleading comments and so on but there are some
> bugs in there as far as I see.
>
>
> In object_to_string you don't need to test for Py_None. PyObject_Str
> will return NULL on failure and whatever str() returns on the
> underlying object. No need to special case None.
>
> In object_to_string, when you're already in a (so != NULL) block, you
> can use Py_DECREF instead of XDECREF.
>

fixed


>
> object_to_string seems buggy to me: it returns the pointer returned by
> PyString_AsString which points to the internal buffer of the Python
> object but it also XDECREFs that object. You seem to be returning a
> pointer to freed space.
>

fixed


>
> get_string_attr seems to have the same issue as object_to_string.
>

I used pstrdup, but the test for Py_None is necessary


PyObject_GetAttrString can return Py_None, and on Python 3.4
PyString_AsString produces the error "ERROR:  could not convert Python
Unicode object to bytes" when the object is None.

So, at a minimum, the test for Py_None in "get_string_attr" is necessary
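
For illustration, a minimal sketch of what such a guarded accessor can
look like (the function body here is an assumption based on this
discussion, not the committed patch):

    static void
    get_string_attr(PyObject *obj, char *attrname, char **str)
    {
        PyObject   *val = PyObject_GetAttrString(obj, attrname);

        /* The attribute may be missing (NULL) or explicitly None;
         * PyString_AsString() on Py_None fails on Python 3, so guard
         * against both before converting. */
        if (val != NULL && val != Py_None)
        {
            /* copy into PostgreSQL memory so the Python object can be
             * released without invalidating the returned string */
            *str = pstrdup(PyString_AsString(val));
        }
        Py_XDECREF(val);
    }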


>
> In PLy_get_error_data query and position will never be set for
> plpy.Error since you don't offer them to Python and therefore don't
> set them in PLy_output_kw. So I think you should remove the code
> related to them, including the char **query, int *position parameters
> for PLy_get_error_data. Removing position also allows removing
> get_int_attr and the need to exercise this function in the tests.
>

makes sense, removed


>
> You're using PLy_get_spi_sqlerrcode in PLy_get_error_data which makes
> the spi in the name unsuitable. I would rename it to just
> PLy_get_sqlerrcode. At least the comment at the top of
> PLy_get_spi_sqlerrcode needs to change since it now also extracts info
> from Error not just SPIError.
>

renamed


>
> /* set value of string pointer on object field */ comments are weird
> for a function that gets a value. But those functions need to change
> anyway since they're buggy (see above).
>

fixed


>
> The only change in plpy_plpymodule.h is the removal of an empty line
> unrelated to this patch, probably from previous versions. You should
> undo it to leave plpy_plpymodule.h untouched.
>

fixed

>
> Why rename PLy_output to PLy_output_kw? It only makes the patch bigger
> without being an improvement. Maybe you also have this from previous
> versions.
>

renamed to PLy_output only


>
> In PLy_output_kw you don't need to check message for NULL, just as sv
> wasn't checked before.
>

It is not necessary, but I think it is better - it is maybe too
defensive - but the message can be taken from more sources than in the
old code, and one NULL check is correct


> In PLy_output_kw you added a FreeErrorData(edata) which didn't exist
> before. I'm not familiar with that API but I'm wondering if it's
> needed or not, good to have it or not etc.
>

The previous code used Python memory for the message. Now I use
PostgreSQL memory (via pstrdup), so I have to call FreeErrorData.
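
For reference, the standard backend pattern this follows (generic
error-handling API usage, not the patch itself; oldcontext is the
context saved before entering PG_TRY):

    MemoryContext oldcontext = CurrentMemoryContext;

    PG_TRY();
    {
        /* code that may ereport(ERROR, ...) */
    }
    PG_CATCH();
    {
        ErrorData  *edata;

        /* CopyErrorData() must not be called while in ErrorContext */
        MemoryContextSwitchTo(oldcontext);
        edata = CopyErrorData();
        FlushErrorState();

        /* edata->message, edata->detail, etc. are now palloc'd copies
         * in PostgreSQL memory; release them once consumed */
        FreeErrorData(edata);
    }
    PG_END_TRY();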



>
> In PLy_output_kw you didn't update the "Note: If sv came from
> PyString_AsString(), it points into storage..." comment which still
> refers to sv which is now deleted.
>

I moved Py_XDECREF(so) up, and removed comment


>
> In PLy_output_kw you removed the "return a legal object so the
> interpreter will continue on its merry way" comment which might not be
> the most valuable comment in the world but removing it is still
> unrelated to this patch.
>

returned


>
> In PLy_output_kw you test for kw != NULL && kw != Py_None. The kw !=
> NULL is definitely needed but I'm quite sure Python will never pass
> Py_None into it so the != Py_None isn't needed. Can't find a reference
> to prove this at the moment.
>

cleaned

>
>
> Some more tests could be added to exercise more parts of the patch:
> - check that SPIError is enhanced with all the new things:
> schema_name, table_name etc.
> - in the plpy.error call use every keyword argument not just detail
> and hint: it proves you save and restore every field correctly from
> the Error fields since that's not exercised by the info call above
> which does use every argument
>



> - use a wrong keyword argument to see it gets rejected with your error
> message
> - try some other types than str (like detail=42) for error as well
> since error goes on another code path than info
> - a test exercising the "invalid sqlstate" code
>

done

Sending updated version

Regards

Pavel
diff --git a/doc/src/sgml/plpython.sgml 

Re: [HACKERS] PATCH: index-only scans with partial indexes

2016-02-25 Thread Kyotaro HORIGUCHI
I marked this as "ready for committer" and tried to add myself as the
*second* author. But the CF app forces a certain mysterious order
on the listed names. Is there any means of arranging the author names
in the desired order?

At Fri, 26 Feb 2016 16:06:37 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI 
 wrote in 
<20160226.160637.65689630.horiguchi.kyot...@lab.ntt.co.jp>
> Hello. This is a (maybe) committer-ready patch for Tomas
> Vondra's project.
> 
> This patch applies on the current master and passes make check.
> 
> This is to exclude some base restriction clauses that are implied by
> index predicates in index scans on partial indexes.
> 
> First, this patch adds a new member indexrinfos to IndexOptInfo,
> which usually has the same value as the baserestrictinfo of the
> parent RelOptInfo; cost_index() uses it instead of
> RelOptInfo.baserestrictinfo.
> 
> For partial indexes, the clauses implied by the index predicates
> are removed from indexrinfos, so that the resulting plan for
> scans on such indexes won't contain those restrictions. Such
> restrictions commonly become filter quals that are nothing but
> useless work, so this will improve the performance of some
> index scans on partial indexes.
> 
> The largest part of the extra cost of the additional work would
> be the cost of predicate_implied_by() on all clauses of
> baserestrictinfo and the indpred of every IndexOptInfo. The extra
> work is done in check_partial_indexes() for all index scans on
> partial indexes.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center






Re: [HACKERS] Declarative partitioning

2016-02-25 Thread Amit Langote

Hi,

Thanks for your feedback.

On 2016/02/26 0:43, Jean-Pierre Pelletier wrote:
> Why not base it on "Exclusion Constraint" ?
> 
> Most discussions as of late seems to focus on Range overlaps which appeal
> (I would think) is that it supports both "equality" and "overlaps", two
> popular partitioning schemes.
> 
> "Equality" as in "value1 = value2" can be implemented with "range
> overlaps"
> as "range(value1,value) = range(value,value2)".
> 
> I would think that Partitioning schemes can be Declarative, Efficient and
> not restricted to Equality and Overlaps as long as all partitions (of a
> partitioned table) are using a single partitioning definition expressed
> as:
> - An Immutable Expression on tuple columns, in the simplest case a single
> column
> - An Operator, in the simplest case, "equality"
> 
> That seems very close to the semantic of "Constraint Exclusion" as
> described here:
> http://thoughts.davisjeff.com/2010/09/25/exclusion-constraints-are-general
> ized-sql-unique/
> 
> If partitioning could be based on EC, it would bring these additional
> benefits:
> - The choice of operator, as long as it is boolean, commutative and
> indexable
> - The use of Expression/Function and not just bare columns

Note that the proposed patch does more or less what you say we should be
doing, minus the "exclusion constraint" part.

With the proposal, you can specify expression(s)/column(s) as the
partition key along with an "operator class" for each column, like below:

CREATE TABLE foo (a type_name) PARTITION BY LIST (a USING opclass_name);
CREATE TABLE bar (a type_name) PARTITION BY RANGE (a USING opclass_name);

Right now, the supported partition strategies are RANGE and LIST, where
"btree operators" suffice.  So in the above example, type_name must have
a suitable btree operator class defined in the system, which could be
opclass_name.  If opclass_name was created as the default for type_name,
one need not write USING opclass_name.
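
For instance, with int's default btree operator class int4_ops, the
following two declarations would be equivalent under the proposed
syntax (a hypothetical illustration, not output from the patch):

    CREATE TABLE t1 (a int) PARTITION BY RANGE (a USING int4_ops);
    CREATE TABLE t2 (a int) PARTITION BY RANGE (a);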

Then when you create a partition of foo:

CREATE TABLE foo_partition PARTITION OF foo FOR VALUES IN (val1, val2);

the system enforces that foo_partition only contains values such that:

  a = ANY ( ARRAY [val1, val2] ),

where the operator "=" refers to an operator belonging to the operator
class opclass_name (hence can have a user-defined notion of "equality").

And when you create a partition of bar:

CREATE TABLE bar_partition PARTITION OF bar FOR VALUES [val1, val2);

the system enforces that bar_partition only contains values such that:

  val1 <= a < val2,

where operators "<=" and "<" refer to the operators belonging to the
operator class opclass_name (hence can have a user-defined notion of
ordering).

Further, the system can also optimize queries based on its knowledge of
the operators appearing in query clauses and the implicit constraints
just mentioned above. Specifically, it can exclude partitions using
"constraint exclusion", which is NOT the same thing as "exclusion
constraints", as you might be aware.

"Exclusion constraints" depend on having suitable a index (just like
unique constraint enforcing btree index) that uses the specified operator
to enforce the constraint:

postgres=# CREATE TABLE circles (
postgres(# c circle,
postgres(# EXCLUDE USING gist (c WITH &&)
postgres(# );
CREATE TABLE

postgres=# \d+ circles
                   Table "public.circles"
 Column |  Type  | Modifiers | Storage | Stats target | Description
--------+--------+-----------+---------+--------------+-------------
 c      | circle |           | plain   |              |
Indexes:
    "circles_c_excl" EXCLUDE USING gist (c WITH &&)

The talk of "non-overlapping partitions" in this thread refers to the
invariant that partition DDL should maintain which uses ad-hoc logic to do
that but which is based on the semantics of the specified operators.

Thanks,
Amit






Re: [HACKERS] [REVIEW] In-core regression tests for replication, cascading, archiving, PITR, etc.

2016-02-25 Thread Craig Ringer
On 26 February 2016 at 13:43, Michael Paquier 
wrote:


> > my $caughtup_query = "SELECT '$current_lsn'::pg_lsn <=
> > pg_last_xlog_replay_location()";
> > use
> > my $caughtup_query = "SELECT pg_xlog_location_diff('$current_lsn',
> > pg_last_xlog_replay_location()) <= 0";
> > so it doesn't care if we replay past the expected LSN on the master due
> to
> > autovacuum activity. That's what's done in the real world and what
> should be
> > covered by the tests IMO.
>
> Those two statements have the same meaning. pg_xlog_location_diff does
> exactly the same thing as the pg_lsn data type in terms of LSN
> comparisons.


Ah. Whoops. I meant to write '=' in the first, to reflect what the code
does.

You're quite right that casting to pg_lsn has the same effect and looks
cleaner.

I think this looks good as of the last version. I'm not keen on disabling
autovacuum but that can be addressed in a followup that makes it easier to
configure params, as discussed. I definitely don't want to complicate this
patch with it.

Should be committed ASAP IMO.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] PATCH: index-only scans with partial indexes

2016-02-25 Thread Kyotaro HORIGUCHI
Hello. This is a (maybe) committer-ready patch for Tomas
Vondra's project.

This patch applies on the current master and passes make check.

This is to exclude some base restriction clauses that are implied by
index predicates in index scans on partial indexes.

First, this patch adds a new member indexrinfos to IndexOptInfo,
which usually has the same value as the baserestrictinfo of the
parent RelOptInfo; cost_index() uses it instead of
RelOptInfo.baserestrictinfo.

For partial indexes, the clauses implied by the index predicates
are removed from indexrinfos, so that the resulting plan for
scans on such indexes won't contain those restrictions. Such
restrictions commonly become filter quals that are nothing but
useless work, so this will improve the performance of some
index scans on partial indexes.
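
As a made-up illustration of the redundant qual this removes:

    CREATE TABLE orders (id int, active boolean);
    CREATE INDEX orders_active_idx ON orders (id) WHERE active;

    -- For this query, the clause "active" is implied by the index
    -- predicate, so a scan on orders_active_idx no longer needs to
    -- re-check it as a filter qual once it is dropped from indexrinfos:
    EXPLAIN SELECT id FROM orders WHERE active AND id < 100;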

The largest part of the extra cost of the additional work would
be the cost of predicate_implied_by() on all clauses of
baserestrictinfo and the indpred of every IndexOptInfo. The extra
work is done in check_partial_indexes() for all index scans on
partial indexes.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
>From 97040f596e34d40f1e514c8385e0877f00d858ca Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi 
Date: Thu, 25 Feb 2016 18:49:36 +0900
Subject: [PATCH] Remove restrictinfo clauses that are implied by index
 predicates.

Some clauses in baserestrictinfo are useless when they are implied by
the index predicates of the partial index currently under consideration.
Removing such clauses improves the performance of index scans.
---
 src/backend/optimizer/path/costsize.c|  5 +-
 src/backend/optimizer/path/indxpath.c| 90 ++--
 src/backend/optimizer/util/pathnode.c|  1 +
 src/include/nodes/relation.h |  7 +++
 src/test/regress/expected/aggregates.out |  8 +--
 5 files changed, 87 insertions(+), 24 deletions(-)

diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 5fc2f9c..36678e0 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -89,6 +89,7 @@
 #include "optimizer/placeholder.h"
 #include "optimizer/plancat.h"
 #include "optimizer/planmain.h"
+#include "optimizer/predtest.h"
 #include "optimizer/restrictinfo.h"
 #include "parser/parsetree.h"
 #include "utils/lsyscache.h"
@@ -433,7 +434,7 @@ cost_index(IndexPath *path, PlannerInfo *root, double loop_count)
 		path->path.rows = path->path.param_info->ppi_rows;
 		/* qpquals come from the rel's restriction clauses and ppi_clauses */
 		qpquals = list_concat(
-	   extract_nonindex_conditions(baserel->baserestrictinfo,
+	   extract_nonindex_conditions(path->indexrinfos,
    path->indexquals),
 			  extract_nonindex_conditions(path->path.param_info->ppi_clauses,
 		  path->indexquals));
@@ -442,7 +443,7 @@ cost_index(IndexPath *path, PlannerInfo *root, double loop_count)
 	{
 		path->path.rows = baserel->rows;
 		/* qpquals come from just the rel's restriction clauses */
-		qpquals = extract_nonindex_conditions(baserel->baserestrictinfo,
+		qpquals = extract_nonindex_conditions(path->indexrinfos,
 			  path->indexquals);
 	}
 
diff --git a/src/backend/optimizer/path/indxpath.c b/src/backend/optimizer/path/indxpath.c
index ddb4ca5..577e7a6 100644
--- a/src/backend/optimizer/path/indxpath.c
+++ b/src/backend/optimizer/path/indxpath.c
@@ -136,8 +136,7 @@ static double adjust_rowcount_for_semijoins(PlannerInfo *root,
 			  Index outer_relid,
 			  double rowcount);
 static double approximate_joinrel_size(PlannerInfo *root, Relids relids);
-static void match_restriction_clauses_to_index(RelOptInfo *rel,
-   IndexOptInfo *index,
+static void match_restriction_clauses_to_index(IndexOptInfo *index,
    IndexClauseSet *clauseset);
 static void match_join_clauses_to_index(PlannerInfo *root,
 			RelOptInfo *rel, IndexOptInfo *index,
@@ -266,7 +265,7 @@ create_index_paths(PlannerInfo *root, RelOptInfo *rel)
 		 * Identify the restriction clauses that can match the index.
 		 */
 		MemSet(&rclauseset, 0, sizeof(rclauseset));
-		match_restriction_clauses_to_index(rel, index, &rclauseset);
+		match_restriction_clauses_to_index(index, &rclauseset);
 
 		/*
 		 * Build index paths from the restriction clauses.  These will be
@@ -1801,13 +1800,13 @@ check_index_only(RelOptInfo *rel, IndexOptInfo *index)
 	 * Check that all needed attributes of the relation are available from the
 	 * index.
 	 *
-	 * XXX this is overly conservative for partial indexes, since we will
-	 * consider attributes involved in the index predicate as required even
-	 * though the predicate won't need to be checked at runtime.  (The same is
-	 * true for attributes used only in index quals, if we are certain that
-	 * the index is not lossy.)  However, it would be quite expensive to
-	 * determine that accurately at this point, so for now we take the easy
-	 * way out.
+	 * For partial indexes we 

Re: [HACKERS] [PATCH v5] GSSAPI encryption support

2016-02-25 Thread Michael Paquier
On Fri, Feb 26, 2016 at 5:02 AM, Robbie Harwood  wrote:
> Michael Paquier  writes:
>> We need to be careful here, backward-compatibility is critical for
>> both the client and the server, or to put it in other words, an
>> unpatched client should still be able to connect to a patched server,
>> and vice-versa. This is an area where it is important not to break any
>> third-party tool, either using libpq or reimplementing the frontend
>> protocol.
>
> Which is why in my introduction to the v4 patch I explicitly mentioned
> that this was missing.  I wanted to talk about how this should be
> implemented, since I feel that I received no feedback on that design
> from last time.  It's not hard to port that design over, if desired.

I gave my opinion about the parametrization here a couple of months
back, and thought that it was rather a neat design. I still think so:
http://www.postgresql.org/message-id/cab7npqrgq60pwlb-cllrvmoqwpabnu-961w+xgpvg62rmsk...@mail.gmail.com
In short:
1) Introduction of a new pg_hba parameter, require_encrypt, defaulting
to off, which when enabled forces clients to use encryption. This way
the administrator of a new server can prevent connections from old
clients if he/she wants all connections encrypted.
2) A client-side parameter, which you previously named gss_encrypt, to
let a client decide whether it wishes to use encryption.
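
To make that concrete, a hypothetical sketch of the two knobs (the
parameter names come from this thread only; neither is a released
feature):

    # pg_hba.conf on the server: reject unencrypted GSSAPI connections
    host    all    all    192.0.2.0/24    gss    require_encrypt=1

    # on the client: ask libpq for GSSAPI encryption
    psql "host=db.example.com dbname=postgres gss_encrypt=1"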

>> So I finally began to dive into your new patch... And after testing,
>> this is breaking the authentication protocol for GSS. I have been able
>> to connect to a server once, but on the second and subsequent attempts
>> the connection fails with the following error:
>> psql: expected authentication request from server, but received ioltas
>
> Interesting.  I will see if I can find anything.  The capture posted
> earlier (thanks David!) suggests that the client doesn't expect the
> encrypted data.
>
> I suspect what has happened is a race between the client buffering data
> From the socket and the client processing the completion of GSSAPI
> authentication.  This kind of issue is why my v1 handled GSSAPI at the
> protocol level, not at the transport level.  I think we end up with
> encrypted data in the buffer that's supposed to be decrypted, and since
> the GSSAPI blob starts with \x00, it doesn't know what to do.
>
> I'll cut a v6 with most of the changes we've talked about here.  It
> should address this issue, but I suspect that no one will be happy about
> how, since the client essentially needs to "un-read" some data.

Let's be sure that we come out with something rock-solid here; the
code paths taken for authentication do not require an authenticated
user, so any bugs introduced could have dangerous consequences for the
backend.

> As a side note, this would also explain why I can't reproduce the issue,
> since I'm running in very low-latency environments (three VMs on my
> laptop).

I'm doing my tests in a single VM, with both krb5kdc and Postgres
running together.

>> Also, something that is missing is the parametrization that has been
>> discussed the last time this patch was on the mailing list. Where is
>> the capacity for a client connecting to a server that performs
>> encryption to downgrade to a non-encrypted connection, should the
>> server not be able to support it? Enforcing a client to require
>> encryption support using pg_hba.conf was a good thing as well. Some
>> more infrastructure is needed here; I thought that there was an
>> agreement previously regarding that.
>
> This does not match my impression of the discussion, but I would be
> happy to be wrong about that since it is less work for me.

OK, good to know. I had the opposite impression actually :) See above.

>> Also, and to make the review a bit easier, it would be good to split
>> the patch into smaller pieces (thanks for waiting on my input here,
>> this became quite clear after looking at this code). Basically,
>> pg_GSS_error is moved to a new file, bringing with it pg_GSS_recvauth
>> because the error routine is being used by the new ones you are
>> introducing: be_gssapi_write, etc. The split that this patch is doing
>> is a bit confusing, all the GSS-related stuff is put within one single
>> file:
>> - read/write routines
>> - authentication routine
>> - constants
>> - routines for error handling
>> Mixing read/write routines with the authentication routine looks
>> wrong, because the read/write routines just need to create a
>> dependency with, for example, be-secure.c on the backend. In short,
>> what happens before authentication and the secure read/writes after
>> authentication get linked to each other, creating a dependency that
>> did not exist before.
>>
>> For the sake of clarity I would suggest the following split:
>> - be-gss-common.c, where all the constants and the error handling
>> routine are located.
>> - Let authrecv in auth.c
>> - Move only the read/write routines to the new file be-[secure-]gssapi.c
>> Splitting 

Re: [HACKERS] insufficient qualification of some objects in dump files

2016-02-25 Thread Tom Lane
Michael Paquier  writes:
> On Fri, Feb 26, 2016 at 9:47 AM, Peter Eisentraut  wrote:
>> Tom thought this might require an archive version bump, but I'm not
>> sure.  The tags are more of an informational string for human
>> consumption, not strictly part of the archive format.

> Hm, the TOC entry, with its tag changed, is part of the dump, and this
> is written in the archive, but the shape of TocEntry does not change
> so this is really debatable.

I had in mind that we would add a separate field for the tag's schema name to
TocEntry, which surely would require an archive format number bump.
As the patch is presented, I agree with Peter that it does not really
need a format number bump.  The question that has to be answered is
whether this solution is good enough?  You could not trust it for
automated processing of tags --- it's easy to think of cases in which the
schema/object name separation would be ambiguous.  So is the tag really
"strictly for human consumption"?  I'm not sure about that.

regards, tom lane




Re: [HACKERS] FDW handling count(*) through AnalyzeForeignTable or other constant time push-down

2016-02-25 Thread Tom Lane
"Gabe F. Rudy"  writes:
> Is there any way to convince Postgres FDW to leverage the analyze row counts 
> or even the "double* totalRowCount" returned from the AcquireSampleRows 
> callback from my AnalyzeForeignTable function so that it does not do a 
> full-table scan for a COUNT(*) etc?

No.  In PG's view, ANALYZE-based row counts are imprecise by definition.

regards, tom lane




Re: [HACKERS] insufficient qualification of some objects in dump files

2016-02-25 Thread Michael Paquier
On Fri, Feb 26, 2016 at 9:47 AM, Peter Eisentraut  wrote:
> On 1/29/16 1:24 AM, Michael Paquier wrote:
>>> I think we should amend the archive tag for these kinds of objects to
>>> > include the table name, so it might look like:
>>> >
>>> > 2153; 2604 39696 DEFAULT public test a rolename
>>> >
>>> > Comments?
>> +1. I noticed that this limitation is present for triggers (as you
>> mentioned), constraints, fk constraints and RLS policies which should
>> be completed with a table name.
>
> Thank you for collecting this list.  Attached is a patch for this.

Apparently rules are affected as well; I forgot to mention them, and
your updated patch handles them correctly.

> Tom thought this might require an archive version bump, but I'm not
> sure.  The tags are more of an informational string for human
> consumption, not strictly part of the archive format.

Hm, the TOC entry, with its tag changed, is part of the dump, and this
is written in the archive, but the shape of TocEntry does not change,
so this is really debatable. It is true that pg_restore -L is able to
work even if the tag output is changed; still, now that I think about
it, a version bump would be preferable: what is generated in the dump
is changed, after all. The previous version bumps that are
K_VERS_1_11 or K_VERS_1_6, which touched the TOC, did add new fields to
the TocEntry structure and influenced pg_restore.
-- 
Michael




Re: [HACKERS] Declarative partitioning

2016-02-25 Thread Amit Langote
On 2016/02/23 22:51, Robert Haas wrote:
> On Thu, Feb 18, 2016 at 11:11 AM, Amit Langote wrote:
>> Some might think that writing potentially the same PARTITION BY clause 100
>> times for 100 level-1 partitions could be cumbersome. That is where
>> SUBPARTITION BY notation could serve as a shorthand.
> 
> I think that anybody who is doing that much partitioning is going to
> use a tool to generate the DDL anyway, so it doesn't really matter.
>>> I think if you've got SUBPARTITION as a keyword in the syntax
>>> anywhere, you're doing it wrong.  The toplevel object shouldn't really
>>> care whether its children are themselves partitioned or not.
>>
>> This is fine for the internals. SUBPARTITION BY is merely syntactic
>> sugar. It might as well be just cascaded PARTITION BYs. The point is to
>> specify as many of those in the top-level CREATE TABLE as the number of
>> levels of partitioning we want. That does, however, fix the number of
>> levels in advance.
> 
> Which I think is not good.  If we say that a table can be partitioned,
> and a table that is a partition can also be partitioned, we've got a
> nice general system.  Fixing the number of partitioning levels for the
> sake of a little syntactic sugar is, IMHO, getting our priorities
> backwards.

OK.

To reiterate the syntax:

CREATE TABLE parent(...) PARTITION BY

CREATE TABLE partition
  PARTITION OF parent FOR VALUES ... [ PARTITION BY ]

ALTER TABLE partitioned
  ATTACH PARTITION name FOR VALUES ...
  USING TABLE source_table [ NO VALIDATE ]

ALTER TABLE partitioned
  DETACH PARTITION name [ WITH new_name ]

A note about NO VALIDATE in ATTACH PARTITION:

If specified, it means the user is telling the system to "trust" that
none of the rows contained in the source table lie outside the partition
boundary specification (the FOR VALUES clause).  This is not the same
thing as adding a NOT VALID check constraint, because such a check
constraint is assumed invalid by the optimizer until explicitly
validated by VALIDATE CONSTRAINT.  The default behavior is to validate
by scanning the source table for violating rows and to fail to add the
partition if any are found.  Because adding a partition requires an
exclusive lock on the parent, the default behavior may cause us to hold
the lock for long durations, which may be undesirable.  Should we do
better than that?
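
A hypothetical usage sketch under the syntax above (object names
invented for illustration):

    ALTER TABLE measurement
      ATTACH PARTITION measurement_y2016m02
      FOR VALUES [ '2016-02-01', '2016-03-01' )
      USING TABLE measurement_staging
      NO VALIDATE;  -- caller asserts all rows fall within the bounds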

>> While we are on the syntax story, how about FOR VALUES syntax for range
>> partitions (sorry for piggybacking it here in this message). At least some
>> people dislike LESS THAN notation. Corey Huinker says we should be using
>> range type literals for that. It's not clear though that using range type
>> literals directly is the way ahead. For one, the range type API expects
>> there to exist a range type with the given element type, whereas all we
>> require for range partitioning proper is for the column type to have a
>> btree operator class. Should we require it to have an associated range
>> type as well? I don't think there exists an API to check that, either.
>> All in all, range types are good for implementing things in applications
>> but not so much within the backend (unless I'm missing something). I
>> know reinventing the wheel is disliked as well, but perhaps we could
>> offer something like the following, because Corey offered some examples
>> which would benefit from the flexibility:
>>
>> START [ EXCL ] (startval) END [ INCL ] (endval)
> 
> I don't think using range type literals is gonna work.  There's no
> guarantee that the necessary range types exist.  However, we could
> possibly use a syntax inspired by the syntax for range types.  I'm a
> little nervous about asking people to type commands with mismatching
> braces:
> 
> CREATE TABLE partition_name PARTITION OF parent_name FOR VALUES [ 1, 10 );
> 
> ...but maybe it's a good idea.  It certainly has the advantage of
> being more compact than a lot of the other ways we might choose to write
> that.  And I think LESS THAN sucks.  It's just clunky and awkward
> syntax.

I'm slightly concerned about the multi-column range partition key case,
but as suggested by Corey, we can use record-like notation.

CREATE TABLE foo(c1 char(2), c2 char(2)) PARTITION BY RANGE (c1, c2);

CREATE TABLE foo_ax1x PARTITION OF foo FOR VALUES [ ('a', 1), ('a', 2) );
CREATE TABLE foo_ax2x PARTITION OF foo FOR VALUES [ ('a', 2), ('a', 3) );
CREATE TABLE foo_ax3x PARTITION OF foo FOR VALUES [ ('a', 3), ('a', 4) );

CREATE TABLE foo_bx1x PARTITION OF foo FOR VALUES [ ('b', 1), ('b', 2) );
CREATE TABLE foo_bx2x PARTITION OF foo FOR VALUES [ ('b', 2), ('b', 3) );
CREATE TABLE foo_bx3x PARTITION OF foo FOR VALUES [ ('b', 3), ('b', 4) );

>> I see the trade-off. I agree that considering the significance for attach
>> partition case is quite important.
>>
>> So, the tuple routing code should be ready to use projection if there
>> happens to be a partition with differing tuple descriptor. In the code I
>> posted, a ResultRelInfo is lazily built afresh for each inserted tuple in
>> ExecInsert's case and for each tuple where 

Re: [HACKERS] PATCH: index-only scans with partial indexes

2016-02-25 Thread Kyotaro HORIGUCHI
Hi,

At Thu, 25 Feb 2016 12:22:45 +0100, Tomas Vondra  
wrote in <56cee405.30...@2ndquadrant.com>
> >> Attached is a v6 of the patch, which is actually the version
> >> submitted by Kyotaro-san on 2015/10/8 rebased to current master and
> >> with two additional changes.
> >
> > This relies on the fact I believe that no indexrelinfos are
> > shared among any two baserels. Are you OK with that?
> 
> I'm not sure what this refers to? You mean the fact that an index will
> have one IndexOptInfo instance for each baserel? Yes, that seems fine
> to me.

Yes, that's what I meant.

> >> Firstly, I've removed the "break" from the initial foreach loop in
> >> check_partial_indexes(). As explained in the previous message, I
> >> believe this was a bug in the patch.
> >
> > I agreed. The break is surely a bug. (the regression didn't find
> > it though..)
> >
> >> Secondly, I've tried to improve the comments to explain a bit better
> >> what the code is doing.
> >
> > In check_partial_indexes, the following commnet.
> >
> >>/*
> >> * Update restrictinfo for this index by excluding clauses that
> >> * are implied by the index predicates. We first quickly check if
> >> * there are any implied clauses, and if we found at least one
> >> * we do the actual work. This way we don't need the construct
> >> * a copy of the list unless actually needed.
> >> */
> >
> > Is "need the construct" a mistake of "need to construct" ?
> >
> >
> > match_restriction_clauses_to_index is called only from
> > create_index_paths, and now the former doesn't use its first
> > parameter rel. We can safely remove the useless parameter.
> >
> > -  match_restriction_clauses_to_index(RelOptInfo *rel, IndexOptInfo
> > -  *index,
> > - IndexClauseSet *clauseset)
> > +  match_restriction_clauses_to_index(IndexOptInfo *index,
> > + IndexClauseSet *clauseset)
> >
> > I find no other problem and have no objection to this. May I mark
> > this "Ready for committer" after fixing them?
> 
> OK. Do you want me to do the fixes, or will you do that?

Ah. I will do. Please wait a minute.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center






Re: [HACKERS] [REVIEW] In-core regression tests for replication, cascading, archiving, PITR, etc.

2016-02-25 Thread Michael Paquier
On Fri, Feb 26, 2016 at 1:47 PM, Craig Ringer  wrote:
> On 26 February 2016 at 10:58, Michael Paquier 
> wrote:
>> Here is a rebased set after the conflicts created by e640093, with the
>> following changes:
>
> Thanks for rebasing on top of that. Not totally fair when your patch came
> first, but I guess it was simpler to merge the other one first.

At this point the final result is the same. It does not matter what
gets in first.

> I do have one major disagreement, which is that you turn autovacuum off if
> streaming is enabled. This is IMO completely wrong and must be removed. It's
> making the tests ignore a major and important part of real-world use.

This was chosen for consistency with what is in the pg_rewind tests,
the idea being to keep the runs more stable, with the WAL output under
control, to allow predictable results. Though now that I think about
it, I do not see any direct reason not to remove it.

> If you did it to make it easier to test replay catchup etc, just use
> pg_xlog_location_diff instead of an equality test. Instead of:
> my $caughtup_query = "SELECT '$current_lsn'::pg_lsn <=
> pg_last_xlog_replay_location()";
> use
> my $caughtup_query = "SELECT pg_xlog_location_diff('$current_lsn',
> pg_last_xlog_replay_location()) <= 0";
> so it doesn't care if we replay past the expected LSN on the master due to
> autovacuum activity. That's what's done in the real world and what should be
> covered by the tests IMO.

Those two statements have the same meaning. pg_xlog_location_diff does
exactly the same thing as the pg_lsn data type in terms of LSN
comparisons. Choosing one or the other is really a matter of taste.
Though I see that 001 is the only test that uses an equality test; I
agree this should not be the case.
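
For instance, both of the following express "the standby has replayed
at least this LSN" ('0/3000060' is just an example value):

    SELECT '0/3000060'::pg_lsn <= pg_last_xlog_replay_location();
    SELECT pg_xlog_location_diff('0/3000060',
                                 pg_last_xlog_replay_location()) <= 0;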

> The patch sets
>
> tap_tests => 1,
>
> in config_default.pl. Was that on purpose? I'd have no problem with running
> the TAP tests by default if they worked by default, but the docs say that at
> least with ActiveState's Perl you have to jump through some hoops to get
> IPC::Run.

No, this was an error in the previous version of the patch 0003. Those
tests should be disabled by default, to match what ./configure does,
and also because installing IPC::Run requires some extra operations,
but that's easily doable with a bit of black magic.

> Typo in PostgresNode.pm: passiong should be 'passing'.

Oops.

> I'd like a way to append parameters in a way that won't clobber settings
> made implicitly by the module through things like enable_streaming but I can
> add that in a followup patch. It doesn't need to complicate this one.

This is something that I have been thinking about for some time while
hacking on this, but I settled on the current version so as not to
complicate the patch more than it needs to be, and because the current
version is enough for the needs of all the tests present. Surely this
can be extended further. One idea I had was, for example, to pass
init() and init_from_backup() a set of key/value pairs as a parameter
that would be appended to postgresql.conf.
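
Something along these lines, perhaps (a sketch only; the extra_config
parameter is hypothetical and not part of the current PostgresNode API,
while get_new_node, init and allows_streaming do exist):

    my $node = get_new_node('master');
    $node->init(
        allows_streaming => 1,
        extra_config     => { wal_log_hints => 'on', autovacuum => 'off' });
    $node->start;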

> I'm thinking of having the tests append an include_dir directive when they
> create a node, maintain a hash of all parameters and rewrite a
> postgresql.conf.taptests file in the include_dir when params are updated.
> Also exposing a 'reload' call.

The reload wrapper would make sense to have. That has not proved to be
necessary yet.
-- 
Michael
From 258dc4978b682f4fed953a5857fc3f50aacc8342 Mon Sep 17 00:00:00 2001
From: Michael Paquier 
Date: Mon, 21 Dec 2015 16:28:34 +0900
Subject: [PATCH 1/3] Fix default configuration of MSVC builds ignoring TAP
 tests

MSVC build scripts use a flag to track whether TAP tests are supported,
but this was not configured correctly. By default, as with the other
build types using ./configure, this is disabled.
---
 src/tools/msvc/Solution.pm   |  1 +
 src/tools/msvc/config_default.pl | 27 ++-
 2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/src/tools/msvc/Solution.pm b/src/tools/msvc/Solution.pm
index ac116b7..c5a43f9 100644
--- a/src/tools/msvc/Solution.pm
+++ b/src/tools/msvc/Solution.pm
@@ -643,6 +643,7 @@ sub GetFakeConfigure
 	$cfg .= ' --enable-integer-datetimes'
 	  if ($self->{options}->{integer_datetimes});
 	$cfg .= ' --enable-nls' if ($self->{options}->{nls});
+	$cfg .= ' --enable-tap-tests' if ($self->{options}->{tap_tests});
 	$cfg .= ' --with-ldap'  if ($self->{options}->{ldap});
 	$cfg .= ' --without-zlib' unless ($self->{options}->{zlib});
 	$cfg .= ' --with-extra-version' if ($self->{options}->{extraver});
diff --git a/src/tools/msvc/config_default.pl b/src/tools/msvc/config_default.pl
index b9f2ff4..e50be7e 100644
--- a/src/tools/msvc/config_default.pl
+++ b/src/tools/msvc/config_default.pl
@@ -3,7 +3,7 @@ use strict;
 use warnings;
 
 our $config = {
-	asserts => 0,# --enable-cassert
+	asserts  => 0,# --enable-cassert
 	  # 

Re: [HACKERS] psql completion for ids in multibyte string

2016-02-25 Thread Kyotaro HORIGUCHI
Hello, this is the second patch, split out. This allows
multibyte names to be completed in psql.

At Fri, 06 Nov 2015 11:47:17 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI 
 wrote in 
<20151106.114717.170526268.horiguchi.kyot...@lab.ntt.co.jp>
> At Thu, 5 Nov 2015 18:32:59 +0900, Amit Langote 
>  wrote in <563b224b.3020...@lab.ntt.co.jp>
> > On 2015/11/05 18:10, Kyotaro HORIGUCHI wrote:
> > > Hello. I don't know whether this is a bug fix or improvement,
> > 
> > Would it be 50-50? :-)
> 
> Yeah, honestly speaking, I doubt that this is valuable, but I feel
> uneasy seeing some of the candidates vanish as completion proceeds
> for no persuasive reason.

The current version of tab-completion fails to complete names
containing multibyte characters.

=# create table いろは (あ int);
=# create table いこい (あ int);
=# drop table 
"いろは" hint_plan.   pg_toast.
"いこい" information_schema.  pg_toast_temp_1.
ALL IN TABLESPACEpg_catalog.  public.
dbms_stats.  pg_temp_1.   
postgres=# alter table "い
=# drop table "い
=# drop table "い   /* No candidate offered */

This is because _complete_from_query counts the string length in
bytes, instead of characters. With this patch the candidates will
appear.

=# drop table "い
"いこい"  "いろは"  
postgres=# drop table "い

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
>From 6c6871f25eb9f7e5bdcc1005dab4cdd29a15b7d0 Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi 
Date: Fri, 26 Feb 2016 14:24:42 +0900
Subject: [PATCH] Fix identifier completion with multibyte characters.

_complete_from_query wrongly takes the byte length of the given text
instead of the character length. This prevents multibyte identifiers
from showing as candidates for completion. This patch fixes it.
---
 src/bin/psql/tab-complete.c | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 5f27120..952db3a 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3197,8 +3197,8 @@ complete_from_schema_query(const char *text, int state)
 static char *
 _complete_from_query(int is_schema_query, const char *text, int state)
 {
-	static int	list_index,
-string_length;
+	static int	list_index;
+	int 		string_length = 0;
 	static PGresult *result = NULL;
 
 	/*
@@ -3211,9 +3211,16 @@ _complete_from_query(int is_schema_query, const char *text, int state)
 		char	   *e_text;
 		char	   *e_info_charp;
 		char	   *e_info_charp2;
+		const char *pstr = text;
 
 		list_index = 0;
-		string_length = strlen(text);
+
+		/* count length as a multibyte text */
+		while (*pstr)
+		{
+			string_length++;
+			pstr += PQmblen(pstr, pset.encoding);
+		}
 
 		/* Free any prior result */
 		PQclear(result);
-- 
1.8.3.1




[HACKERS] Fixing wrong comment on PQmblen and PQdsplen.

2016-02-25 Thread Kyotaro HORIGUCHI
Hello,

I divided the last patch into one typo-fix patch and one
improvement patch. This is the former one.

At Fri, 06 Nov 2015 11:47:17 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI 
 wrote in 
<20151106.114717.170526268.horiguchi.kyot...@lab.ntt.co.jp>
> > Just a minor nitpick -
> > 
> > ... character beginning *at* s ...?
> > 
> > If so, there would be one more instance to fix.
> 
> I think so.  I overlooked both of them. And as you mention,
> PQdsplen has the same kind of typo. It returns the display length of
> the *character* beginning *at* s, too.

This patch fixes the wrong comments on PQmblen() and PQdsplen().  The
comments say "returns the length of the word beginning s", but what
each function actually returns is the length of the *character*
beginning at s.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
>From fed3b85cdca897d4439bb0589fd39642826b84b8 Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi 
Date: Fri, 26 Feb 2016 14:05:24 +0900
Subject: [PATCH] Fix wrong comments for PQmblen() and PQdsplen().

These functions actually count the length of a character, but their
comments wrongly document it as the length of a word. This patch fixes
them.
---
 src/interfaces/libpq/fe-misc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 9be05a0..30cee7f 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1180,7 +1180,7 @@ pqSocketPoll(int sock, int forRead, int forWrite, time_t end_time)
  */
 
 /*
- * returns the byte length of the word beginning s, using the
+ * returns the byte length of the character beginning at s, using the
  * specified encoding.
  */
 int
@@ -1190,7 +1190,7 @@ PQmblen(const char *s, int encoding)
 }
 
 /*
- * returns the display length of the word beginning s, using the
+ * returns the display length of the character beginning at s, using the
  * specified encoding.
  */
 int
-- 
1.8.3.1




Re: [HACKERS] [REVIEW] In-core regression tests for replication, cascading, archiving, PITR, etc.

2016-02-25 Thread Craig Ringer
On 26 February 2016 at 10:58, Michael Paquier 
wrote:


>
> Here is a rebased set after the conflicts created by e640093, with the
> following changes:
>

Thanks for rebasing on top of that. Not totally fair when your patch came
first, but I guess it was simpler to merge the other one first.


> - In 0002, added perldoc for new promote routine
> - In 0003, added perldoc documentation for the new options introduced
> in init and init_from_backup, and fixed some log entries not using the
> node name to identify the node involved when enabling archive,
> streaming or recovery.
>

Very much appreciated.


> - Craig has pinged me regarding tap_tests being incorrectly updated in
> config_default.pl in 0003.
> I just re-ran the tests on OSX and Windows (MSVC 2010 with Win7) to be
> sure that nothing broke, and nothing has been reported as broken.
>
>

I've looked over the tests. I see that you've updated the docs for the
Windows tests to reflect the changes, which is good, thanks.

I like the patch and would love to see it committed soon.

I do have one major disagreement, which is that you turn autovacuum off if
streaming is enabled. This is IMO completely wrong and must be removed.
It's making the tests ignore a major and important part of real-world use.

If you did it to make it easier to test replay catchup etc, just use
pg_xlog_location_diff instead of an equality test. Instead of:

my $caughtup_query = "SELECT '$current_lsn'::pg_lsn <=
pg_last_xlog_replay_location()";

use

my $caughtup_query = "SELECT pg_xlog_location_diff('$current_lsn',
pg_last_xlog_replay_location()) <= 0";

so it doesn't care if we replay past the expected LSN on the master due to
autovacuum activity. That's what's done in the real world and what should
be covered by the tests IMO.


The patch sets

tap_tests => 1,

in config_default.pl. Was that on purpose? I'd have no problem with running
the TAP tests by default if they worked by default, but the docs say that
at least with ActiveState's Perl you have to jump through some hoops to get
IPC::Run.

Typo in PostgresNode.pm: passiong should be 'passing'.


Otherwise looks _really_ good and I'd love to see this committed very soon.


I'd like a way to append parameters in a way that won't clobber settings
made implicitly by the module through things like enable_streaming but I
can add that in a followup patch. It doesn't need to complicate this one.
I'm thinking of having the tests append an include_dir directive when they
create a node, maintain a hash of all parameters and rewrite a
postgresql.conf.taptests file in the include_dir when params are updated.
Also exposing a 'reload' call.


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] IF (NOT) EXISTS in psql-completion

2016-02-25 Thread Kyotaro HORIGUCHI
Hello, thank you for the comments.

At Mon, 15 Feb 2016 15:43:57 +0300, Artur Zakirov  
wrote in <56c1c80d.7020...@postgrespro.ru>
> On 05.02.2016 11:09, Kyotaro HORIGUCHI wrote:
> > Hello,
> >
> > I considered how to make tab-completion robust against syntactical
> > noise, in other words, optional words in the syntax. Typically "IF
> > (NOT) EXISTS", UNIQUE and TEMPORARY are words that don't affect
> > further completion. However, the current delimit-matching
> > mechanism is not capable enough (or is complexity-prone) to live with
> > such noise. I have proposed using regular expressions, or a
> > simplified form of them, for robustness, but that was too complex to
> > be applied.
> >
> > This is another answer to the problem. Removal of such words
> > on-the-fly makes further matching more robust.
> >
> > Next, currently some CREATE xxx subsyntaxes of CREATE SCHEMA are
> > matched using TailMatching, but that makes the options-removal
> > operations difficult, since they need forward matching.
> >
> > So I introduced two things to resolve them by this patch.
> >
> 
> I did some tests with your patch. But I am not confident in
> tab-complete.c.
> 
> And I have some notes:
> 
> 1 - I execute git apply command and get the following warning:
> 
> ../0001-Suggest-IF-NOT-EXISTS-for-tab-completion-of-psql.patch:302:
> trailing whitespace.
>   /*
> warning: 1 line adds whitespace errors.
> 
> This is because of superfluous whitespace I think.

Oops. I'll remove it.

> 2 - In psql I write "create table if" and press <TAB>. psql adds the
> following:
> 
> create table IF NOT EXISTS
> 
> I think psql should continue with lower case if the user wrote the
> query in lower case:

Good catch! Hmm. COMPLETE_WITH_SCHEMA_QUERY() does it. For
example, the following existing completion behaves in the same
way.

"drop index c" ==> "drop index CONCURRENTLY"

The results of schema queries should be treated in a case-sensitive
way, so the additional keywords added by 'UNION' are also treated
case-sensitively.

COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_indexes,
   " UNION SELECT 'CONCURRENTLY'");

Fixing this is another problem. So I'd like to leave this alone
here.

> create table if not exists
> 
> 3 - Same with "IF EXISTS". If I write "alter view if" and press <TAB>,
> psql writes:
> 
> alter view IF EXISTS

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center, NTT Corporation
Phone: 03-5860-5115 / Fax: 03-5463-5490






Re: [HACKERS] improving GROUP BY estimation

2016-02-25 Thread Tomas Vondra

Hi,

On 02/26/2016 04:32 AM, Mark Dilger wrote:



On Feb 25, 2016, at 4:59 PM, Tomas Vondra  wrote:


...


The culprit here is that the two columns are not independent, but
are (rather strongly) correlated due to the way you've generated
the data.


Yes, that was intentional. Your formula is exactly correct, so far as
I can tell, for completely uncorrelated data. I don't have any tables
with completely uncorrelated data, and was therefore interested in
what happens when the data is correlated and your patch is applied.

BTW, the whole reason I responded to your post is that I think I would like
to have your changes in the code base.  I'm just playing Devil's Advocate
here, to see if there is room for any improvement.


Sure, that's how I understood it. I just wanted to point out the 
correlation, as that might not have been obvious to others.



Thanks for the patch submission. I hope my effort to review it is on
the whole more helpful than harmful.


Thanks for the feedback!

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Incorrect formula for SysV IPC parameters

2016-02-25 Thread Kyotaro HORIGUCHI
Thank you for pushing this.

At Tue, 16 Feb 2016 15:07:32 +0900, Fujii Masao  wrote 
in 

Re: [HACKERS] improving GROUP BY estimation

2016-02-25 Thread Mark Dilger

> On Feb 25, 2016, at 4:59 PM, Tomas Vondra  
> wrote:
> 
> Hi,
> 
> On 02/26/2016 12:16 AM, Mark Dilger wrote:
>> 
>> It is not hard to write test cases where your patched version overestimates
>> the number of rows by a very similar factor as the old code underestimates
>> them.  My very first test, which was not specifically designed to demonstrate
>> this, happens to be one such example:
>> 
>> 
>> CREATE TABLE t (a INT, b int);
>> INSERT INTO t SELECT sqrt(gs)::int, gs FROM generate_series(1,1000) gs;
>> ANALYZE t;
>> EXPLAIN SELECT a FROM t WHERE b < 1000 GROUP BY a;
>>                           QUERY PLAN
>> --------------------------------------------------------------
>>  HashAggregate  (cost=169250.21..169258.71 rows=850 width=4)
>>    Group Key: a
>>    ->  Seq Scan on t  (cost=0.00..169247.71 rows=1000 width=4)
>>          Filter: (b < 1000)
>> (4 rows)
>> 
>> SELECT COUNT(*) FROM (SELECT a FROM t WHERE b < 1000 GROUP BY a) AS ss;
>>  count
>> -------
>>     32
>> (1 row)
>> 
>> 
>> 
>> So, it estimates 850 rows where only 32 are returned. Without
>> applying your patch, it estimates just 1 row where 32 are returned.
>> That's an overestimate of roughly 26 times, rather than an
>> underestimate of 32 times.
> 
> The culprit here is that the two columns are not independent, but are (rather 
> strongly) correlated due to the way you've generated the data.

Yes, that was intentional.  Your formula is exactly correct, so far as
I can tell, for completely uncorrelated data.  I don't have any tables
with completely uncorrelated data, and was therefore interested in what
happens when the data is correlated and your patch is applied.
BTW, the whole reason I responded to your post is that I think I would like
to have your changes in the code base.  I'm just playing Devil's Advocate
here, to see if there is room for any improvement.

> In cases like this (with correlated columns), it's mostly just luck how 
> accurate estimates you get, no matter which of the estimators you use. It's 
> pretty easy to generate arbitrary errors by breaking the independence 
> assumption, and it's not just this particular place of the planner. And it 
> won't change until we add some sort of smartness about dependencies between 
> columns.
> 
> I think over-estimates are a bit more acceptable in this case, e.g. because 
> of the HashAggregate OOM risk. Also, I've seen too many nested loop cascades 
> due to under-estimates recently, but that's a bit unrelated.

I have seen similar problems in systems I maintain, hence my interest
in your patch.


>> As a patch review, I'd say that your patch does what you claim it
>> does, and it applies cleanly, and passes the regression tests with
>> my included modifications. I think there needs to be some discussion
>> on the list about whether the patch is a good idea.
> 
> Yes, that's why I haven't bothered with fixing the regression tests in the 
> patch, actually.

Right, but for the group as a whole, I thought it might generate some
feedback if people saw what else changed in the regression results.
If you look, the changes have to do with plans chosen and row estimates --
exactly the sort of stuff you would expect to change.

Thanks for the patch submission.  I hope my effort to review it is on the
whole more helpful than harmful.

Mark Dilger



Re: [HACKERS] [REVIEW] In-core regression tests for replication, cascading, archiving, PITR, etc.

2016-02-25 Thread Michael Paquier
On Wed, Feb 17, 2016 at 9:52 PM, Michael Paquier
 wrote:
> On Fri, Feb 5, 2016 at 4:17 AM, Michael Paquier wrote:
>> Thanks for your enthusiasm. Now, to do an auto-critic of my patch:
>> +   if ($params{allows_streaming})
>> +   {
>> +   print $conf "wal_level = hot_standby\n";
>> +   print $conf "max_wal_senders = 5\n";
>> +   print $conf "wal_keep_segments = 20\n";
>> +   print $conf "max_wal_size = 128MB\n";
>> +   print $conf "shared_buffers = 1MB\n";
>> +   print $conf "wal_log_hints = on\n";
>> +   print $conf "hot_standby = on\n";
>> +   }
>> This could have more thoughts, particularly for wal_log_hints which is
>> not used all the time, I think that we'd actually want to complete
>> that with an optional hash of parameter/values that get appended at
>> the end of the configuration file, then pass wal_log_hints in the
>> tests where it is needed. The default set of parameter is maybe fine
>> if done this way, still wal_keep_segments could be removed.
>
> In the end I have refrained from doing that, and refactored
> setup_cluster@RewindTest.pm to use the new option allows_streaming,
> simplifying the code a bit. The introduction of allows_streaming could
> be done in a separate patch, though it did not seem worth the
> complication when hacking on that. The split is simple, though.
>
>> +# Tets for timeline switch
>> +# Encure that a standby is able to follow a newly-promoted standby
>> Two typos in two lines.
>
> Fixed.
>
> I also found an issue with the use of application_name causing test
> 001 to fail, bug squashed on the way. The generation of paths for
> archive_command and restore_command was incorrect on Windows. Those
> need to use two backslashes (to detect correctly files) and need to be
> double-quoted (to avoid errors with command copy should a space be
> included in the path). I have added as well a new subcommand in
> vcregress.pl called recoverycheck where one can run the recovery test
> suite on Windows using MSVC.
>
> Attached are rebased patches, split into 3 parts doing the following:
> - 0001, fix default configuration of MSVC builds ignoring TAP tests
> - 0002, add a promote routine in PostgresNode.pm. pg_rewind's tests
> can make immediate use of that.
> - 0003, the actual test suite.
> This is registered in CF 2016-03 as well for further consideration.

Here is a rebased set after the conflicts created by e640093, with the
following changes:
- In 0002, added perldoc for new promote routine
- In 0003, added perldoc documentation for the new options introduced
in init and init_from_backup, and fixed some log entries not using the
node name to identify the node involved when enabling archive,
streaming or recovery.
- Craig has pinged me regarding tap_tests being incorrectly updated in
config_default.pl in 0003.
I just re-ran the tests on OSX and Windows (MSVC 2010 with Win7) to be
sure that nothing broke, and nothing has been reported as broken.
-- 
Michael
From 258dc4978b682f4fed953a5857fc3f50aacc8342 Mon Sep 17 00:00:00 2001
From: Michael Paquier 
Date: Mon, 21 Dec 2015 16:28:34 +0900
Subject: [PATCH 1/3] Fix default configuration of MSVC builds ignoring TAP
 tests

MSVC build scripts use a flag to track if TAP tests are supported or not
but this was not configured correctly. By default, like the other build
types using ./configure, this is disabled.
---
 src/tools/msvc/Solution.pm   |  1 +
 src/tools/msvc/config_default.pl | 27 ++-
 2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/src/tools/msvc/Solution.pm b/src/tools/msvc/Solution.pm
index ac116b7..c5a43f9 100644
--- a/src/tools/msvc/Solution.pm
+++ b/src/tools/msvc/Solution.pm
@@ -643,6 +643,7 @@ sub GetFakeConfigure
 	$cfg .= ' --enable-integer-datetimes'
 	  if ($self->{options}->{integer_datetimes});
 	$cfg .= ' --enable-nls' if ($self->{options}->{nls});
+	$cfg .= ' --enable-tap-tests' if ($self->{options}->{tap_tests});
 	$cfg .= ' --with-ldap'  if ($self->{options}->{ldap});
 	$cfg .= ' --without-zlib' unless ($self->{options}->{zlib});
 	$cfg .= ' --with-extra-version' if ($self->{options}->{extraver});
diff --git a/src/tools/msvc/config_default.pl b/src/tools/msvc/config_default.pl
index b9f2ff4..e50be7e 100644
--- a/src/tools/msvc/config_default.pl
+++ b/src/tools/msvc/config_default.pl
@@ -3,7 +3,7 @@ use strict;
 use warnings;
 
 our $config = {
-	asserts => 0,# --enable-cassert
+	asserts  => 0,# --enable-cassert
 	  # integer_datetimes=>1,   # --enable-integer-datetimes - on is now default
 	  # float4byval=>1, # --disable-float4-byval, on by default
 
@@ -13,18 +13,19 @@ our $config = {
 	# blocksize => 8, # --with-blocksize, 8kB by default
 	# wal_blocksize => 8, # --with-wal-blocksize, 8kB by default
 	# wal_segsize => 16,  # --with-wal-segsize, 16MB by default
-	ldap => 1,# 

Re: [HACKERS] Performance degradation in commit 6150a1b0

2016-02-25 Thread Amit Kapila
On Thu, Feb 25, 2016 at 11:38 PM, Simon Riggs  wrote:

> On 24 February 2016 at 23:26, Amit Kapila  wrote:
>
>> For the past few weeks, we have been facing some performance degradation
>> in the read-only performance benchmarks on high-end machines.  My
>> colleague Mithun tried reverting commit ac1d794, which seems to degrade
>> performance in HEAD on high-end machines as reported previously[1], but
>> we were still getting degradation.  We then did some profiling to see
>> what caused it and found that it is mainly caused by a spinlock taken
>> via pin/unpin buffer, so we tried reverting commit 6150a1b0, which
>> recently changed the structures in that area, and it turns out that
>> after reverting that patch we don't see any degradation in performance.
>> The important point to note is that the performance degradation doesn't
>> occur every time, but if the tests are repeated twice or thrice, it
>> is easily visible.
>>
>
> Not seen that on the original patch I posted. 6150a1b0 contains multiple
> changes to the lwlock structures, one written by me, others by Andres.
>
> Perhaps we should revert that patch and re-apply the various changes in
> multiple commits so we can see the differences.
>
>
Yes, that's one choice; the other is that we locally narrow down the root
cause of the problem and then try to address it.  Last time a similar issue
came up on the list, the agreement [1] was to note it down in the
PostgreSQL 9.6 open items and then work on it.  I think we haven't got to
the root cause of this problem yet, so we can try to investigate it.  If
nobody else steps up to reproduce and look into the problem, I will look
into it in a few days.

[1] -
http://www.postgresql.org/message-id/CA+TgmoYjYqegXzrBizL-Ov7zDsS=GavCnxYnGn9WZ1S=rp8...@mail.gmail.com

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Support for N synchronous standby servers - take 2

2016-02-25 Thread Kyotaro HORIGUCHI
At Fri, 26 Feb 2016 10:38:22 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI 
 wrote in 
<20160226.103822.12680005.horiguchi.kyot...@lab.ntt.co.jp>
> Hello, Thanks for the new patch.
> 
> 
> At Fri, 26 Feb 2016 08:52:54 +0900, Masahiko Sawada  
> wrote in 

Re: [HACKERS] Support for N synchronous standby servers - take 2

2016-02-25 Thread Kyotaro HORIGUCHI
Hello, Thanks for the new patch.


At Fri, 26 Feb 2016 08:52:54 +0900, Masahiko Sawada  
wrote in 

Re: [HACKERS] improving GROUP BY estimation

2016-02-25 Thread Tomas Vondra

Hi,

On 02/26/2016 12:16 AM, Mark Dilger wrote:
> It is not hard to write test cases where your patched version overestimates
> the number of rows by a very similar factor as the old code underestimates
> them.  My very first test, which was not specifically designed to demonstrate
> this, happens to be one such example:
>
> CREATE TABLE t (a INT, b int);
> INSERT INTO t SELECT sqrt(gs)::int, gs FROM generate_series(1,10000000) gs;
> ANALYZE t;
> EXPLAIN SELECT a FROM t WHERE b < 1000 GROUP BY a;
>                               QUERY PLAN
> ----------------------------------------------------------------------
>  HashAggregate  (cost=169250.21..169258.71 rows=850 width=4)
>    Group Key: a
>    ->  Seq Scan on t  (cost=0.00..169247.71 rows=1000 width=4)
>          Filter: (b < 1000)
> (4 rows)
>
> SELECT COUNT(*) FROM (SELECT a FROM t WHERE b < 1000 GROUP BY a) AS ss;
>  count
> -------
>     32
> (1 row)
>
> So, it estimates 850 rows where only 32 are returned.  Without
> applying your patch, it estimates just 1 row where 32 are returned.
> That's an overestimate of roughly 26 times, rather than an
> underestimate of 32 times.

The culprit here is that the two columns are not independent, but are
(rather strongly) correlated due to the way you've generated the data.

In cases like this (with correlated columns), it's mostly just luck how
accurate estimates you get, no matter which of the estimators you use.
It's pretty easy to generate arbitrary errors by breaking the
independence assumption, and it's not just this particular place of the
planner. And it won't change until we add some sort of smartness about
dependencies between columns.

I think over-estimates are a bit more acceptable in this case, e.g.
because of the HashAggregate OOM risk. Also, I've seen too many nested
loop cascades due to under-estimates recently, but that's a bit unrelated.

> As a patch review, I'd say that your patch does what you claim it
> does, and it applies cleanly, and passes the regression tests with
> my included modifications. I think there needs to be some discussion
> on the list about whether the patch is a good idea.

Yes, that's why I haven't bothered with fixing the regression tests in
the patch, actually.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] insufficient qualification of some objects in dump files

2016-02-25 Thread Peter Eisentraut
On 1/29/16 1:24 AM, Michael Paquier wrote:
>> I think we should amend the archive tag for these kinds of objects to
>> include the table name, so it might look like:
>>
>> 2153; 2604 39696 DEFAULT public test a rolename
>>
>> Comments?
> +1. I noticed that this limitation is present for triggers (as you
> mentioned), constraints, fk constraints and RLS policies which should
> be completed with a table name.

Thank you for collecting this list.  Attached is a patch for this.

Tom thought this might require an archive version bump, but I'm not
sure.  The tags are more of an informational string for human
consumption, not strictly part of the archive format.

From 44a23b4e960040f1e5de7a0ae0ecb6de432c4027 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Thu, 25 Feb 2016 18:56:37 -0500
Subject: [PATCH] pg_dump: Add table qualifications to some tags

Some object types have names that are only unique for one table.  But
for those we generally didn't put the table name into the dump TOC tag.
So it was impossible to identify these objects if the same name was used
for multiple tables.  This affects policies, column defaults,
constraints, triggers, and rules.

Fix by adding the table name to the TOC tag, so that it now reads
"$schema $table $object".
---
 src/bin/pg_dump/pg_dump.c | 42 ++
 1 file changed, 34 insertions(+), 8 deletions(-)

diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 64c2673..8bb134d 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -3057,6 +3057,7 @@ dumpPolicy(Archive *fout, PolicyInfo *polinfo)
 	PQExpBuffer query;
 	PQExpBuffer delqry;
 	const char *cmd;
+	char	   *tag;
 
 	if (dopt->dataOnly)
 		return;
@@ -3124,8 +3125,10 @@ dumpPolicy(Archive *fout, PolicyInfo *polinfo)
 	appendPQExpBuffer(delqry, "DROP POLICY %s", fmtId(polinfo->polname));
 	appendPQExpBuffer(delqry, " ON %s;\n", fmtId(tbinfo->dobj.name));
 
+	tag = psprintf("%s %s", tbinfo->dobj.name, polinfo->dobj.name);
+
 	ArchiveEntry(fout, polinfo->dobj.catId, polinfo->dobj.dumpId,
- polinfo->dobj.name,
+ tag,
  polinfo->dobj.namespace->dobj.name,
  NULL,
  tbinfo->rolname, false,
@@ -3134,6 +3137,7 @@ dumpPolicy(Archive *fout, PolicyInfo *polinfo)
  NULL, 0,
  NULL, NULL);
 
+	free(tag);
 	destroyPQExpBuffer(query);
 	destroyPQExpBuffer(delqry);
 }
@@ -14626,6 +14630,7 @@ dumpAttrDef(Archive *fout, AttrDefInfo *adinfo)
 	int			adnum = adinfo->adnum;
 	PQExpBuffer q;
 	PQExpBuffer delq;
+	char	   *tag;
 
 	/* Skip if table definition not to be dumped */
 	if (!tbinfo->dobj.dump || dopt->dataOnly)
@@ -14654,8 +14659,10 @@ dumpAttrDef(Archive *fout, AttrDefInfo *adinfo)
 	appendPQExpBuffer(delq, "ALTER COLUMN %s DROP DEFAULT;\n",
 	  fmtId(tbinfo->attnames[adnum - 1]));
 
+	tag = psprintf("%s %s", tbinfo->dobj.name, tbinfo->attnames[adnum - 1]);
+
 	ArchiveEntry(fout, adinfo->dobj.catId, adinfo->dobj.dumpId,
- tbinfo->attnames[adnum - 1],
+ tag,
  tbinfo->dobj.namespace->dobj.name,
  NULL,
  tbinfo->rolname,
@@ -14664,6 +14671,7 @@ dumpAttrDef(Archive *fout, AttrDefInfo *adinfo)
  NULL, 0,
  NULL, NULL);
 
+	free(tag);
 	destroyPQExpBuffer(q);
 	destroyPQExpBuffer(delq);
 }
@@ -14804,6 +14812,7 @@ dumpConstraint(Archive *fout, ConstraintInfo *coninfo)
 	TableInfo  *tbinfo = coninfo->contable;
 	PQExpBuffer q;
 	PQExpBuffer delq;
+	char	   *tag = NULL;
 
 	/* Skip if not to be dumped */
 	if (!coninfo->dobj.dump || dopt->dataOnly)
@@ -14897,8 +14906,10 @@ dumpConstraint(Archive *fout, ConstraintInfo *coninfo)
 		appendPQExpBuffer(delq, "DROP CONSTRAINT %s;\n",
 		  fmtId(coninfo->dobj.name));
 
+		tag = psprintf("%s %s", tbinfo->dobj.name, coninfo->dobj.name);
+
 		ArchiveEntry(fout, coninfo->dobj.catId, coninfo->dobj.dumpId,
-	 coninfo->dobj.name,
+	 tag,
 	 tbinfo->dobj.namespace->dobj.name,
 	 indxinfo->tablespace,
 	 tbinfo->rolname, false,
@@ -14930,8 +14941,10 @@ dumpConstraint(Archive *fout, ConstraintInfo *coninfo)
 		appendPQExpBuffer(delq, "DROP CONSTRAINT %s;\n",
 		  fmtId(coninfo->dobj.name));
 
+		tag = psprintf("%s %s", tbinfo->dobj.name, coninfo->dobj.name);
+
 		ArchiveEntry(fout, coninfo->dobj.catId, coninfo->dobj.dumpId,
-	 coninfo->dobj.name,
+	 tag,
 	 tbinfo->dobj.namespace->dobj.name,
 	 NULL,
 	 tbinfo->rolname, false,
@@ -14965,8 +14978,10 @@ dumpConstraint(Archive *fout, ConstraintInfo *coninfo)
 			appendPQExpBuffer(delq, "DROP CONSTRAINT %s;\n",
 			  fmtId(coninfo->dobj.name));
 
+			tag = psprintf("%s %s", tbinfo->dobj.name, coninfo->dobj.name);
+
 			ArchiveEntry(fout, coninfo->dobj.catId, coninfo->dobj.dumpId,
-		 coninfo->dobj.name,
+		 tag,
 		 tbinfo->dobj.namespace->dobj.name,
 		 NULL,
 		 tbinfo->rolname, false,
@@ -15001,8 +15016,10 @@ dumpConstraint(Archive *fout, ConstraintInfo *coninfo)
 			appendPQExpBuffer(delq, "DROP CONSTRAINT %s;\n",
 		

Re: [HACKERS] Writing new unit tests with PostgresNode

2016-02-25 Thread Alvaro Herrera
Thanks, pushed.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] improving GROUP BY estimation

2016-02-25 Thread Mark Dilger

> On Feb 25, 2016, at 3:16 PM, Mark Dilger  wrote:
> 
> 
>> On Feb 23, 2016, at 5:12 PM, Tomas Vondra  
>> wrote:
>> 
>> 
>> 
>> So much better. Clearly, there are cases where this will over-estimate the 
>> cardinality - for example when the values are somehow correlated.
>> 
> 
> I applied your patch, which caused a few regression tests to fail.  Attached
> is a patch that includes the necessary changes to the expected test results.
> 
> It is not hard to write test cases where your patched version overestimates
> the number of rows by a very similar factor as the old code underestimates
> them.  My very first test, which was not specifically designed to demonstrate
> this, happens to be one such example:
> 
> 
> CREATE TABLE t (a INT, b int);
> INSERT INTO t SELECT sqrt(gs)::int, gs FROM generate_series(1,10000000) gs;
> ANALYZE t;
> EXPLAIN SELECT a FROM t WHERE b < 1000 GROUP BY a;
>                               QUERY PLAN
> ----------------------------------------------------------------------
>  HashAggregate  (cost=169250.21..169258.71 rows=850 width=4)
>    Group Key: a
>    ->  Seq Scan on t  (cost=0.00..169247.71 rows=1000 width=4)
>          Filter: (b < 1000)
> (4 rows)
>
> SELECT COUNT(*) FROM (SELECT a FROM t WHERE b < 1000 GROUP BY a) AS ss;
>  count
> -------
>     32
> (1 row)
> 
> 
> 
> So, it estimates 850 rows where only 32 are returned.  Without applying
> your patch, it estimates just 1 row where 32 are returned.  That's an
> overestimate of roughly 26 times, rather than an underestimate of 32 times.
> 
> As a patch review, I'd say that your patch does what you claim it does,
> and it applies cleanly, and passes the regression tests with my included
> modifications.  I think there needs to be some discussion on the list
> about whether the patch is a good idea.
> 
> Mark Dilger
> 
> 
> 

Assuming the goal is to minimize the degree to which the estimates are
inaccurate, I get better results by a kind of averaging of the old values
from the current code base and the new values from Tomas's patch.  I tried
this and, at least for the few examples we've been throwing around, I found
the results were not as wildly inaccurate in the worst-case examples as in
either of the other two approaches:

diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 46c95b0..c83135a 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -3414,6 +3414,8 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows,
 		Assert(rel->reloptkind == RELOPT_BASEREL);
 		if (rel->tuples > 0)
 		{
+			double		old_factor;	/* The way it is currently done on master */
+			double		new_factor;	/* The way Tomas Vondra proposes doing it */
 			/*
 			 * Clamp to size of rel, or size of rel / 10 if multiple Vars. The
 			 * fudge factor is because the Vars are probably correlated but we
@@ -3440,7 +3442,10 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows,
 			/*
 			 * Multiply by restriction selectivity.
 			 */
-			reldistinct *= rel->rows / rel->tuples;
+			old_factor = rel->rows / rel->tuples;	/* old way */
+			new_factor = (1 - powl((reldistinct - 1) / reldistinct, rel->rows));	/* Tomas's way */
+
+			reldistinct *= sqrt(old_factor * new_factor);	/* average of old way and new way, sort of */
 
 			/*
 			 * Update estimate of total distinct groups.
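
Plugging the test case quoted above into this hybrid gives a feel for it.
A back-of-the-envelope check (a sketch only; it assumes the planner's
ndistinct estimate for a is about sqrt(10,000,000), i.e. ~3162, and that
~1000 rows pass the filter):

SELECT 3162 * sqrt((1000.0 / 10000000) * (1 - pow(3161.0 / 3162.0, 1000)))
       AS hybrid_estimate;  -- ~16: closer to the actual 32 than either 1 or 850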






Re: [HACKERS] Support for N synchronous standby servers - take 2

2016-02-25 Thread Masahiko Sawada
On Fri, Feb 26, 2016 at 1:23 AM, Masahiko Sawada  wrote:
> Attached latest patch includes document patch.
>
>> When I changed s_s_names to 'hoge*' and reloaded the configuration file,
>> the server crashed unexpectedly with the following error message.
>> This is obviously a bug.
>
> Fixed.
>
>>   - allows any byte except a double quote in double-quoted
>>     representation. A double-quote just after a delimiter can open
>>     quoted representation.
>
> No, a double quote is also allowed in double-quoted representation, by
> writing two double quotes.
> If s_s_names = '"node""hoge"' then the standby name will be 'node"hoge'.
>
>>
>> I have no problem with it. The attached new sample parser does
>> so.
>>
>> By the way, your parser also complains for an example I've seen
>> somewhere upthread "1[2,3,4]". This is because '2', '3' and '4'
>> are regarded as INT, not NAME. Whether a sequence of digits is a
>> prefix number of a list or a host name cannot be identified until
>> reading some following characters. So my previous test.l defined
>> NAME_OR_INTEGER and it is distinguished in the grammar side to
>> resolve this problem.
>>
>> If you want them identified in the lexer side, it should do
>> looking-forward as {prefix} in the attached
>> test.l does. This makes the lexer a bit complex but in contrast
>> test.y simpler. The test.l, test.y attached got refactored but .l
>> gets a bit tricky..
>
> I think that the lexer can pass both INT and NAME as char* to the parser,
> and then the parser regards them as integer or char*.
> It would be simpler.
> Thoughts?
>
> Thank you for giving a lexer and parser example, but I'm not sure that it
> makes things easier.
> It seems to make things more complex.
>
> The attached patch handles the parameter in a similar way to how postgres
> parses SQL.  Please have a look at it and give me feedback.
>

The previous patch could not parse one-character standby names correctly.
Attached is the latest patch.

Regards,

--
Masahiko Sawada


000_multi_sync_replication_v12.patch
Description: Binary data



Re: [HACKERS] improving GROUP BY estimation

2016-02-25 Thread Mark Dilger

> On Feb 23, 2016, at 5:12 PM, Tomas Vondra  
> wrote:
> 
> 
> 
> So much better. Clearly, there are cases where this will over-estimate the 
> cardinality - for example when the values are somehow correlated.
> 

I applied your patch, which caused a few regression tests to fail.  Attached
is a patch that includes the necessary changes to the expected test results.

It is not hard to write test cases where your patched version overestimates
the number of rows by a very similar factor as the old code underestimates
them.  My very first test, which was not specifically designed to demonstrate
this, happens to be one such example:


CREATE TABLE t (a INT, b int);
INSERT INTO t SELECT sqrt(gs)::int, gs FROM generate_series(1,10000000) gs;
ANALYZE t;
EXPLAIN SELECT a FROM t WHERE b < 1000 GROUP BY a;
                              QUERY PLAN
----------------------------------------------------------------------
 HashAggregate  (cost=169250.21..169258.71 rows=850 width=4)
   Group Key: a
   ->  Seq Scan on t  (cost=0.00..169247.71 rows=1000 width=4)
         Filter: (b < 1000)
(4 rows)

SELECT COUNT(*) FROM (SELECT a FROM t WHERE b < 1000 GROUP BY a) AS ss;
 count
-------
    32
(1 row)



So, it estimates 850 rows where only 32 are returned.  Without applying
your patch, it estimates just 1 row where 32 are returned.  That's an
overestimate of roughly 26 times, rather than an underestimate of 32 times.
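
The arithmetic can be checked by hand; a sketch, assuming the planner's
ndistinct estimate for a is about sqrt(10,000,000), i.e. ~3162, and that
~1000 rows pass the filter:

SELECT 3162 * (1000.0 / 10000000)              AS old_estimate,  -- ~0.3, clamped to 1
       3162 * (1 - pow(3161.0 / 3162.0, 1000)) AS new_estimate;  -- ~857, the ~850 above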

As a patch review, I'd say that your patch does what you claim it does,
and it applies cleanly, and passes the regression tests with my included
modifications.  I think there needs to be some discussion on the list
about whether the patch is a good idea.

Mark Dilger


diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 46c95b0..82a7f7f 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -3438,9 +3438,9 @@ estimate_num_groups(PlannerInfo *root, List *groupExprs, double input_rows,
 			reldistinct = clamp;
 
 			/*
-			 * Multiply by restriction selectivity.
+			 * Estimate number of distinct values expected in given number of rows.
 			 */
-			reldistinct *= rel->rows / rel->tuples;
+			reldistinct *= (1 - powl((reldistinct - 1) / reldistinct, rel->rows));
 
 			/*
 			 * Update estimate of total distinct groups.
diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out
index 59d7877..d9dd5ca 100644
--- a/src/test/regress/expected/join.out
+++ b/src/test/regress/expected/join.out
@@ -3951,17 +3951,17 @@ select d.* from d left join (select * from b group by b.id, b.c_id) s
   on d.a = s.id;
   QUERY PLAN   
 ---
- Merge Left Join
-   Merge Cond: (d.a = s.id)
-   ->  Sort
- Sort Key: d.a
- ->  Seq Scan on d
+ Merge Right Join
+   Merge Cond: (s.id = d.a)
->  Sort
  Sort Key: s.id
  ->  Subquery Scan on s
->  HashAggregate
  Group Key: b.id
  ->  Seq Scan on b
+   ->  Sort
+ Sort Key: d.a
+ ->  Seq Scan on d
 (11 rows)
 
 -- similarly, but keying off a DISTINCT clause
@@ -3970,17 +3970,17 @@ select d.* from d left join (select distinct * from b) s
   on d.a = s.id;
  QUERY PLAN  
 -
- Merge Left Join
-   Merge Cond: (d.a = s.id)
-   ->  Sort
- Sort Key: d.a
- ->  Seq Scan on d
+ Merge Right Join
+   Merge Cond: (s.id = d.a)
->  Sort
  Sort Key: s.id
  ->  Subquery Scan on s
->  HashAggregate
  Group Key: b.id, b.c_id
  ->  Seq Scan on b
+   ->  Sort
+ Sort Key: d.a
+ ->  Seq Scan on d
 (11 rows)
 
 -- check join removal works when uniqueness of the join condition is enforced
diff --git a/src/test/regress/expected/subselect.out b/src/test/regress/expected/subselect.out
index de64ca7..0fc93d9 100644
--- a/src/test/regress/expected/subselect.out
+++ b/src/test/regress/expected/subselect.out
@@ -807,27 +807,24 @@ select * from int4_tbl where
 explain (verbose, costs off)
 select * from int4_tbl o where (f1, f1) in
   (select f1, generate_series(1,2) / 10 g from int4_tbl i group by f1);
-  QUERY PLAN  
---
- Hash Join
+   QUERY PLAN   
+
+ Hash Semi Join
Output: o.f1
Hash Cond: (o.f1 = "ANY_subquery".f1)
->  Seq Scan on public.int4_tbl o
  Output: o.f1
->  Hash
  Output: 

Re: [HACKERS] create opclass documentation outdated

2016-02-25 Thread Julien Rouhaud
On 25/02/2016 23:12, Vik Fearing wrote:
> On 02/25/2016 09:13 PM, Julien Rouhaud wrote:
>> Hello,
>> 
>> I just saw that the CREATE OPERATOR CLASS documentation doesn't
>> mention that BRIN indexes also support the STORAGE parameter.
> 
> Good catch!
> 

Thanks

>> Patch attached.
> 
> I think this is the wrong approach; that parenthetical list now 
> represents a full 50% of the available AMs.  I submit it should be 
> removed altogether.
> 

On further inspection, the "Interfacing Extensions To Indexes"
(§35.14) documentation also has a list of AMs supporting the STORAGE
parameter. I believe having at least one page referencing which AMs
support this parameter could be helpful for people who want to
implement a new opclass.

I'll send an updated patch as soon as we have a consensus here :)

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org




Re: [HACKERS] create opclass documentation outdated

2016-02-25 Thread Vik Fearing
On 02/25/2016 09:13 PM, Julien Rouhaud wrote:
> Hello,
> 
> I just saw that the CREATE OPERATOR CLASS documentation doesn't mention
> that BRIN indexes also support the STORAGE parameter.

Good catch!

> Patch attached.

I think this is the wrong approach; that parenthetical list now
represents a full 50% of the available AMs.  I submit it should be
removed altogether.
-- 
Vik Fearing  +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Password identifiers, protocol aging and SCRAM protocol

2016-02-25 Thread Michael Paquier
On Fri, Feb 26, 2016 at 1:38 AM, Valery Popov  wrote:
> Hi, Michael
>
>
> On 23.02.2016 10:17, Michael Paquier wrote:
>>
>> Attached is a set of patches implementing a couple of things that have
>> been discussed, so let's roll in.
>>
>> Those 4 patches are aimed at putting in-core basics for the concept I
>> call password protocol aging, which is a way to allow multiple
>> password protocols to be defined in Postgres, and aimed at easing
>> administration as well as retirement of outdated protocols, which is
>> something that is not doable now in Postgres.
>>
>> The second set of patch 0005~0008 introduces a new protocol, SCRAM.
>> 9) 0009 is the SCRAM authentication itself
>
> The theme of password checking is interesting to me, and I can review
> some of the features for the CF.
> I think that reviewing all the suggested features will require a lot of time.
> Is it possible to make a subset of patches concerning only password strength
> and its aging?
> The patches you posted are not independent; they have to be applied
> sequentially, one by one.
> Thus patch 0009 can't be applied without a git error before 0001.
> Under these conditions all patches were successfully applied and compiled.
> All tests passed successfully.

If you want to focus on the password protocol aging, you could just
have a look at 0001~0004.
-- 
Michael




Re: [HACKERS] Fuzzy substring searching with the pg_trgm extension

2016-02-25 Thread Jeff Janes
On Fri, Jan 29, 2016 at 6:15 AM, Teodor Sigaev  wrote:
>> The behavior of this function is surprising to me.
>>
>> select substring_similarity('dog' ,  'hotdogpound') ;
>>
>>   substring_similarity
>> --
>>   0.25
>>
> Substring search was desined to search similar word in string:
> contrib_regression=# select substring_similarity('dog' ,  'hot dogpound') ;
>  substring_similarity
> --
>  0.75
>
> contrib_regression=# select substring_similarity('dog' ,  'hot dog pound') ;
>  substring_similarity
> --
> 1
> It seems to me that users search for words in a long string. But I agree
> that a more detailed explanation is needed and, maybe, we need to change the
> feature name to fuzzywordsearch or something else; I can't imagine what.

If we implement my proposed behavior, and I wanted the existing
behavior instead, I could just do:

select substring_similarity(' dog ' ,  'hotdogpound');

But with the existing implementation, there isn't anything I could do to
switch to the one I want instead. So treating it purely as a substring
is more flexible than treating it as a word match.

The reason I like the option of not treating word boundaries as
special in this case is that often in scientific vocabulary, and in
catalog part numbers, people are pretty inconsistent about whether
they include spaces.  "HEK 293", "HEK293", and "HEK-293" could all be
the same thing.  So I like to strip out spaces and punctuation on both
sides of the operator.  Of course I can't do that if there are invisible
un-removable spaces on the substring side.
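
Concretely, the stripping I have in mind looks something like this (just a
sketch, using the substring_similarity name from this thread's patch):

SELECT substring_similarity(
           regexp_replace('HEK-293', '[[:space:][:punct:]]', '', 'g'),
           regexp_replace('catalog: HEK 293 cells', '[[:space:][:punct:]]', '', 'g'));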

But it doesn't sound like I am going to win that debate.  Given that,
I don't think we need a different name for the function. I'm fine with
explaining the word-boundary subtlety in the documentation, and
keeping the function name itself simple.

Cheers,

Jeff




Re: [HACKERS][PATCH] Integer overflow in timestamp[tz]_part() and date/time boundaries check

2016-02-25 Thread Vitaly Burovoy

Added to the commitfest 2016-03.

[CF] https://commitfest.postgresql.org/9/540/

-- 
Best regards,
Vitaly Burovoy




[HACKERS] create opclass documentation outdated

2016-02-25 Thread Julien Rouhaud
Hello,

I just saw that the CREATE OPERATOR CLASS documentation doesn't mention
that BRIN indexes also support the STORAGE parameter.
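
As a quick catalog check that BRIN really does use a different storage type
in core (I would expect the inclusion opclasses, e.g. point stored as box,
to show up here):

SELECT a.amname, o.opcname, o.opckeytype::regtype AS storage_type
  FROM pg_opclass o
  JOIN pg_am a ON a.oid = o.opcmethod
 WHERE a.amname = 'brin' AND o.opckeytype <> 0;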

Patch attached.

Regards
-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org
diff --git a/doc/src/sgml/ref/create_opclass.sgml b/doc/src/sgml/ref/create_opclass.sgml
index 527f33c..63bfdee 100644
--- a/doc/src/sgml/ref/create_opclass.sgml
+++ b/doc/src/sgml/ref/create_opclass.sgml
@@ -232,7 +232,7 @@ CREATE OPERATOR CLASS name [ DEFAUL
  
   The data type actually stored in the index.  Normally this is
   the same as the column data type, but some index methods
-  (currently GiST and GIN) allow it to be different.  The
+  (currently GiST, GIN and BRIN) allow it to be different.  The
   STORAGE clause must be omitted unless the index
   method allows a different type to be used.
  



Re: [HACKERS] [PATCH v5] GSSAPI encryption support

2016-02-25 Thread Robbie Harwood
Michael Paquier  writes:

> On Tue, Feb 16, 2016 at 2:45 AM, Robbie Harwood  wrote:
>> David Steele  writes:
>>> On 2/10/16 4:06 PM, Robbie Harwood wrote:
 Hello friends,

 For your consideration, here is a new version of GSSAPI encryption
 support.  For those who prefer, it's also available on my github:
 https://github.com/frozencemetery/postgres/commit/c92275b6605d7929cda5551de47a4c60aab7179e
>>>
>>> It tried out this patch and ran into a few problems:
>>>
>>> 2) While I was able to apply the patch and get it compiled it seemed
>>> pretty flaky - I was only able to logon about 1 in 10 times on average.
>>>
>>> When not successful, the client always outputs this incomplete message
>>> (without a terminating LF):
>>>
>>> psql: expected authentication request from server, but received
>>>
>>> From the logs I can see the server is reporting EOF from the client,
>>> though the client does not core dump and prints the above message before
>>> exiting.
>>>
>>> I have attached files that contain server logs at DEBUG5 and tcpdump
>>> output for both the success and failure cases.
>>>
>>> Please let me know if there's any more information you would like me to
>>> provide.
>>
>> What I can't tell from looking at your methodology is whether both the
>> client and server were running my patches or not.  There's no fallback
>> here (I'd like to talk about how that should work, with example from
>> v1-v3, if people have ideas).  This means that both the client and the
>> server need to be running my patches for the moment.  Is this your
>> setup?
>
> We need to be careful here; backward compatibility is critical for
> both the client and the server, or to put it in other words, an
> unpatched client should still be able to connect to a patched server, and
> vice-versa. This is an area where it is important to not break any
> third-party tool, either using libpq or reimplementing the frontend
> protocol.

Which is why in my introduction to the v4 patch I explicitly mentioned
that this was missing.  I wanted to talk about how this should be
implemented, since I feel that I received no feedback on that design
from last time.  It's not hard to port that design over, if desired.

> So I finally began to dive into your new patch... And after testing
> this is breaking the authentication protocol for GSS. I have been able
> to connect to a server once, but at the second and later attempts the
> connection is failing with the following error:
> psql: expected authentication request from server, but received ioltas

Interesting.  I will see if I can find anything.  The capture posted
earlier (thanks David!) suggests that the client doesn't expect the
encrypted data.

I suspect what has happened is a race between the client buffering data
from the socket and the client processing the completion of GSSAPI
authentication.  This kind of issue is why my v1 handled GSSAPI at the
protocol level, not at the transport level.  I think we end up with
encrypted data in the buffer that's supposed to be decrypted, and since
the GSSAPI blob starts with \x00, it doesn't know what to do.

I'll cut a v6 with most of the changes we've talked about here.  It
should address this issue, but I suspect that no one will be happy about
how, since the client essentially needs to "un-read" some data.

As a side note, this would also explain why I can't reproduce the issue,
since I'm running in very low-latency environments (three VMs on my
laptop).

> Also, something that is missing is the parametrization that has been
> discussed the last time this patch was on the mailing list. Where is
> the capacity to control if a client is connecting to a server that is
> performing encryption and downgrade to a non-encrypted connection should
> the server not be able to support it? Enforcing a client to require
> encryption support using pg_hba.conf was as well a good thing. Some
> more infrastructure is needed here, I thought that there was an
> agreement previously regarding that.

This does not match my impression of the discussion, but I would be
happy to be wrong about that since it is less work for me.

> Also, and to make the review a bit easier, it would be good to split
> the patch into smaller pieces (thanks for waiting on my input here,
> this became quite clear after looking at this code). Basically,
> pg_GSS_error is moved to a new file, bringing with it pg_GSS_recvauth
> because the error routine is being used by the new ones you are
> introducing: be_gssapi_write, etc. The split that this patch is doing
> is a bit confusing, all the GSS-related stuff is put within one single
> file:
> - read/write routines
> - authentication routine
> - constants
> - routines for error handling
> Mixing read/write routines with the authentication routine looks
> wrong, because the read-write routines just need to create a
> dependency with for example be-secure.c on the backend. In short,
> 

Re: [HACKERS] GIN data corruption bug(s) in 9.6devel

2016-02-25 Thread Jeff Janes
On Wed, Feb 24, 2016 at 8:51 AM, Teodor Sigaev  wrote:
> Thank you for remembering this problem, at least for me.
>
>>> Well, turns out there's a quite significant difference, actually. The
>>> index sizes I get (quite stable after multiple runs):
>>>
>>> 9.5 : 2428 MB
>>> 9.6 + alone cleanup : 730 MB
>>> 9.6 + pending lock : 488 MB
>
> Interesting, I don't see why alone_cleanup and pending_lock are so differ.
> I'd like to understand that, but does somebody have an good theory?

Under my patch, anyone who wanted to do a clean up and detected
someone else was doing one would wait for the concurrent one to end.
(This is more consistent with the existing behavior, I just made it so
they don't do any damage while they wait.)

Under your patch, if a backend wants to do a clean up and detects
someone else is already doing one, it would just skip the clean up and
proceed on with whatever it was doing.  This allows one process
(hopefully a vacuum, but maybe a user backend) to get pinned down
indefinitely, as other processes keep putting stuff onto the end of
the pending_list with no throttle.

Since the freespace recycling only takes place once the list is
completely cleaned, allowing some processes to add to the end while
one poor process is trying to clean can lead to less effective
recycling.

That is my theory, anyway.
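
(A side note for anyone reproducing this: 9.6devel already has
gin_clean_pending_list(), committed separately from both patches, which
makes it easy to force a cleanup from a chosen backend while testing:

SELECT gin_clean_pending_list('some_gin_index'::regclass);  -- returns pages cleaned

where some_gin_index is whatever index the benchmark builds.)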

Cheers,

Jeff




Re: [HACKERS]WIP: Covering + unique indexes.

2016-02-25 Thread Jeff Janes
On Thu, Feb 11, 2016 at 8:46 AM, Anastasia Lubennikova
 wrote:
> 02.02.2016 15:50, Anastasia Lubennikova:

>
> As promised, here's the new version of the patch "including_columns_4.0".
> I fixed all issues except some points mentioned below.

Thanks for the updated patch.  I get a compiler warning:

genam.c: In function 'BuildIndexValueDescription':
genam.c:259: warning: unused variable 'tupdesc'

Also, I can't create a primary key INCLUDING columns directly:

jjanes=# create table foobar (a int, b int, c int);
jjanes=# alter table foobar add constraint foobar_pkey primary key
(a,b) including (c);
ERROR:  syntax error at or near "including"

But I can get there using a circuitous route:

jjanes=# create unique index on foobar (a,b) including (c);
jjanes=# alter table foobar add constraint foobar_pkey primary key
using index foobar_a_b_c_idx;

The description of the table's index knows to include the including column:

jjanes=# \d foobar
     Table "public.foobar"
 Column |  Type   | Modifiers
--------+---------+-----------
 a      | integer | not null
 b      | integer | not null
 c      | integer |
Indexes:
    "foobar_pkey" PRIMARY KEY, btree (a, b) INCLUDING (c)


Since the machinery appears to all be in place to have primary keys
with INCLUDING columns, it would be nice if the syntax for adding
primary keys allowed one to implement them directly.

Is this something for future expansion, or could it be added at the
same time as the main patch?

I think this is something that would be pretty frustrating for the user
to be unable to do right from the start.

Cheers,

Jeff




Re: [HACKERS] Performance degradation in commit 6150a1b0

2016-02-25 Thread Simon Riggs
On 24 February 2016 at 23:26, Amit Kapila  wrote:

> For the past few weeks, we have been facing some performance degradation
> in the read-only performance benchmarks on high-end machines.  My
> colleague Mithun tried reverting commit ac1d794, which seems to degrade
> performance in HEAD on high-end machines as reported previously[1], but
> we were still getting degradation.  We then did some profiling to see
> what caused it and found that it is mainly caused by a spinlock taken
> via pin/unpin buffer, so we tried reverting commit 6150a1b0, which
> recently changed the structures in that area, and it turns out that
> after reverting that patch we don't see any degradation in performance.
> The important point to note is that the performance degradation doesn't
> occur every time, but if the tests are repeated twice or thrice, it
> is easily visible.
>

Not seen that on the original patch I posted. 6150a1b0 contains multiple
changes to the lwlock structures, one written by me, others by Andres.

Perhaps we should revert that patch and re-apply the various changes in
multiple commits so we can see the differences.

-- 
Simon Riggs                http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


[HACKERS] FDW handling count(*) through AnalyzeForeignTable or other constant time push-down

2016-02-25 Thread Gabe F. Rudy
Hey all,

I'm building an FDW around a column-store backend (similar to CStore but for 
genomic data!).

I have tables in the billions of rows, and have a common query pattern of 
asking for the table size (i.e. SELECT COUNT(*) FROM big_fdw_table; ).

This is a read-optimized system in which I know in constant time the exact 
dimensions of the table.

Is there any way to convince Postgres FDW to leverage the analyze row counts or 
even the "double* totalRowCount" returned from the AcquireSampleRows callback 
from my AnalyzeForeignTable function so that it does not do a full-table scan 
for a COUNT(*) etc?

My current fallback is to export a specialized function that returns the table 
row count for a given FDW table, but that then leaks into the user-application 
driving these queries.
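
For the sake of discussion, that fallback looks roughly like this (all names
here are illustrative, not from any actual patch):

-- hypothetical C function exported by the FDW's shared library
CREATE FUNCTION fdw_exact_row_count(tbl regclass) RETURNS bigint
    AS 'my_column_store_fdw', 'fdw_exact_row_count'
    LANGUAGE C STRICT;

SELECT fdw_exact_row_count('big_fdw_table');  -- instead of SELECT COUNT(*) ...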

Thanks in advance!
Gabe

Gabe Rudy | VP Product & Engineering | Golden Helix, Inc.



Re: [HACKERS] GIN data corruption bug(s) in 9.6devel

2016-02-25 Thread Tomas Vondra

Hi,

On 02/25/2016 05:32 PM, Teodor Sigaev wrote:
>> Well, turns out there's a quite significant difference, actually. The
>> index sizes I get (quite stable after multiple runs):
>>
>> 9.5 : 2428 MB
>> 9.6 + alone cleanup : 730 MB
>> 9.6 + pending lock : 488 MB
>
> Attached is a modified alone_cleanup patch which doesn't break the cleanup
> process as the pending_lock patch does, but doesn't touch the lock
> management.
>
> Tomas, if you can, please repeat the test with this patch. If not, I will
> try to do it, but later.

I'll do that once the system I used for that becomes available - right now
it's running other benchmarks.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Request for Code Review: BPGSQL

2016-02-25 Thread Jacek Wielemborek
W dniu 11.02.2016 o 14:26, Jacek Wielemborek pisze:
> W dniu 11.02.2016 o 14:06, Rich Jones pisze:
>> Hello, team!
>>
>> I am writing on behalf of the BPGSQL Project [1] to request a code audit
>> from a core PGSQL team member.
>>
>> The current maintainer is worried about the security of the code, and is
>> considering closing the project unless it can be properly reviewed [2]. As
>> a project living downstream[3] of that client library, I'd obviously much
>> rather see that project get reviewed rather than see it die.
>>
>> Would anybody here be so kind as to volunteer to give BPGSQL a code review
>> from an upstream developer's perspective? It would have a lot of value
>> downstream users who want to use Postgres on Amazon RDS for serverless
>> applications, and I'm sure in plenty of other places.
>>
>> Thanks very much!,
>> Rich Jones
>>
>> [1] https://github.com/d33tah/bpgsql
>> [2] https://github.com/d33tah/bpgsql/issues/7
>> [3] https://github.com/Miserlou/django-zappa/issues/3
>>
> 
> Hello,
> 
> Thanks Rich, I second the request for a code review.
> 
> I thought I'd add that this is a 1500-line pure-Python PostgreSQL client
> module that I inherited from Barry Pederson. After I realized how
> execute() is implemented, I have my worries and I'd rather not risk
> making my users vulnerable.
> 
> I'd be really grateful if somebody who knows a bit of Python and the
> guts of PostgreSQL could speak up on this one.
> 
> Cheers,
> d33tah
> 

Hello,

I just unsubscribed from the mailing list so please CC next time you
post a reply to this thread.

Cheers,
d33tah





Re: [HACKERS] Password identifiers, protocol aging and SCRAM protocol

2016-02-25 Thread Valery Popov

Hi, Michael


On 23.02.2016 10:17, Michael Paquier wrote:

Attached is a set of patches implementing a couple of things that have
been discussed, so let's roll in.

Those 4 patches are aimed at putting in-core basics for the concept I
call password protocol aging, which is a way to allow multiple
password protocols to be defined in Postgres, and aimed at easing
administration as well as retirement of outdated protocols, which is
something that is not doable now in Postgres.

The second set of patch 0005~0008 introduces a new protocol, SCRAM.
9) 0009 is the SCRAM authentication itself
The theme of password checking is interesting to me, and I can review
some of the features for the CF.

I think that reviewing all the suggested features will require a lot of time.
Is it possible to make a subset of patches concerning only password
strength and its aging?
The patches you posted are not independent; they have to be applied
sequentially, one by one.

Thus patch 0009 can't be applied without a git error before 0001.
Under these conditions all patches were successfully applied and compiled.
All tests passed successfully.

The first 4 patches obviously are the core portion that I would like
to discuss about in this CF, as they put in the base for the rest, and
will surely help Postgres long-term. 0005~0008 are just refactoring
patches, so they are quite simple. 0009 though is quite difficult, and
needs careful review because it manipulates areas of the code where it
is not necessary to be an authenticated user, so if there are bugs in
it, it would be possible, for example, to crash Postgres just by
sending authentication requests.


--
Regards,
Valery Popov
Postgres Professional http://www.postgrespro.com
The Russian Postgres Company





Re: [HACKERS] GIN data corruption bug(s) in 9.6devel

2016-02-25 Thread Teodor Sigaev

Well, turns out there's a quite significant difference, actually. The
index sizes I get (quite stable after multiple runs):

9.5 : 2428 MB
9.6 + alone cleanup : 730 MB
9.6 + pending lock : 488 MB


Attached is a modified alone_cleanup patch which doesn't break the cleanup
process as the pending_lock patch does, but doesn't touch the lock management.

Tomas, if you can, please repeat the test with this patch. If not, I will try
to do it, but later.


--
Teodor Sigaev   E-mail: teo...@sigaev.ru
   WWW: http://www.sigaev.ru/


gin_alone_cleanup-3.1.patch
Description: binary/octet-stream



Re: [HACKERS] Support for N synchronous standby servers - take 2

2016-02-25 Thread Masahiko Sawada
Attached latest patch includes document patch.

> When I changed s_s_names to 'hoge*' and reloaded the configuration file,
> the server crashed unexpectedly with the following error message.
> This is obviously a bug.

Fixed.

>   - allows any byte except a double quote in double-quoted
>     representation. A double-quote just after a delimiter can open
>     quoted representation.

No, a double quote is also allowed in double-quoted representation, by
writing two double quotes.
If s_s_names = '"node""hoge"' then the standby name will be 'node"hoge'.
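
For example, with this patch's proposed syntax (a hypothetical setting):

ALTER SYSTEM SET synchronous_standby_names = '2[node1, "node""hoge", node3]';
SELECT pg_reload_conf();
-- the second listed standby's name is now: node"hoge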

>
> I have no problem with it. The attached new sample parser does
> so.
>
> By the way, your parser also complains for an example I've seen
> somewhere upthread "1[2,3,4]". This is because '2', '3' and '4'
> are regarded as INT, not NAME. Whether a sequence of digits is a
> prefix number of a list or a host name cannot be identified until
> reading some following characters. So my previous test.l defined
> NAME_OR_INTEGER and it is distinguished in the grammar side to
> resolve this problem.
>
> If you want them identified in the lexer side, it should do
> looking-forward as {prefix} in the attached
> test.l does. This makes the lexer a bit complex but in contrast
> test.y simpler. The test.l, test.y attached got refactored but .l
> gets a bit tricky..

I think that the lexer can pass both INT and NAME as char* to the parser,
and then the parser regards them as integer or char*.
It would be simpler.
Thoughts?

Thank you for giving a lexer and parser example, but I'm not sure that it
makes things easier.
It seems to make things more complex.

The attached patch handles the parameter in a similar way to how postgres
parses SQL.  Please have a look at it and give me feedback.

Regards,

--
Masahiko Sawada


000_multi_sync_replication_v11.patch
Description: Binary data




Re: [HACKERS] Declarative partitioning

2016-02-25 Thread Jean-Pierre Pelletier
Why not base it on "Exclusion Constraint"?

Most discussions as of late seem to focus on Range overlaps, whose appeal
(I would think) is that it supports both "equality" and "overlaps", two
popular partitioning schemes.

"Equality" as in "value1 = value2" can be implemented with "range
overlaps"
as "range(value1,value) = range(value,value2)".

I would think that Partitioning schemes can be Declarative, Efficient and
not restricted to Equality and Overlaps as long as all partitions (of a
partitioned table) are using a single partitioning definition expressed
as:
- An Immutable Expression on tuple columns, in the simplest case a single
column
- An Operator, in the simplest case, "equality"

That seems very close to the semantic of "Constraint Exclusion" as
described here:
http://thoughts.davisjeff.com/2010/09/25/exclusion-constraints-are-general
ized-sql-unique/

If partitioning could be based on EC, it would bring these additional
benefits:
- The choice of operator, as long as it is boolean, commutative and
  indexable
- The use of an Expression/Function and not just bare columns
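
A crude sketch of that mapping with today's CHECK constraints, expressing
equality as overlap of singleton ranges (whether the planner can already
prove exclusion from && is another question):

CREATE TABLE m   (id int, payload text);
CREATE TABLE m_1 (LIKE m, CHECK (int4range(id, id, '[]') && int4range(1, 1000)));
CREATE TABLE m_2 (LIKE m, CHECK (int4range(id, id, '[]') && int4range(1000, 2000)));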

Jean-Pierre Pelletier




Re: [HACKERS] [PATCH v5] GSSAPI encryption support

2016-02-25 Thread David Steele
On 2/25/16 2:08 AM, Michael Paquier wrote:
> On Wed, Feb 24, 2016 at 7:12 PM, Robbie Harwood  wrote:
>>
>> Not that I can immediately see.  As long as the client and server are
>> both patched, everything should work.  My process is the same as with
>> previous versions of this patchset [0], and though I'm using FreeIPA
>> there is no reason it shouldn't work with any other KDC (MIT, for
>> instance[1]) provided the IPA calls are converted.
> 
> I used a custom krb5kdc set up manually, and all my connection
> attempts are working on HEAD, not with your patch (both client and
> server patched).

I've got the same setup with the same results.

>> I am curious, though - I haven't changed any of the authentication code
>> in v4/v5 from what's in ~master, so how often can you log in using
>> GSSAPI using master?
> 
> My guess is that there is something not been correctly cleaned up when
> closing the connection. The first attempt worked for me, not after.

I was able to get in again after a number of failed attempts, though the
number varied.

-- 
-David
da...@pgmasters.net





Re: [HACKERS] get current log file

2016-02-25 Thread Tom Lane
Robert Haas  writes:
> On Thu, Feb 25, 2016 at 1:15 AM, Euler Taveira  wrote:
>>> To pass last_syslogger_file_time, we have two solutions: (1) add a
>>> global variable to record last_syslogger_file_time, shared by the
>>> backends and the syslogger, so backends can get it very easily;
>>> (2) the syslogger process sends last_syslogger_file_time to the pgstat
>>> process whenever it changes, just as other auxiliary processes send
>>> stats messages to the pgstat process, and the pgstat process writes
>>> last_syslogger_file_time into the stats file so that backends can
>>> get it by reading that file.

>> I prefer (1) because (i) the logfile name is not statistics, and (ii)
>> the stats collector might not respond in certain circumstances (and may
>> even discard some messages).

> (1) seems like a bad idea, because IIUC, the syslogger process doesn't
> currently touch shared memory.  And in fact, shared memory can be
> reset after a backend exits abnormally, but the syslogger (alone among
> all PostgreSQL processes other than the postmaster) lasts across
> multiple such resets.

Yes, allowing the syslogger to depend on shared memory is right out.
I don't particularly care for having it assume the stats collector
exists, either -- in fact, given the current initialization order
it's physically impossible for syslogger to send to stats collector
because the former is started before the latter's communication
socket is made.

I haven't actually heard a use-case for exposing the current log file name
anyway.  But if somebody convinced me that there is one, I should think
that the way to implement it is to report the actual *name*, not
components out of which you could reconstruct the name only by assuming
that you know everything about the current syslogger configuration and
the code that builds log file names.  That's obviously full of race
conditions and code-maintenance hazards.
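
A sketch of the shape such an interface might take (hypothetical; no
such function existed at the time of this thread):

    -- The syslogger would remember the name of the file it actually
    -- opened, and this function would simply return that stored name:
    SELECT pg_current_logfile();
    -- e.g.  pg_log/postgresql-2016-02-25_000000.log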

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] The plan for FDW-based sharding

2016-02-25 Thread Bruce Momjian
On Thu, Feb 25, 2016 at 01:53:12PM +0900, Michael Paquier wrote:
> > Well, as far as I know XC doesn't support data redistribution between
> > nodes and I saw good benchmarks of that, as well as XL.
> 
> XC does support that in 1.2 with a very basic approach (I coded it
> years ago), though it takes an exclusive lock on the table involved.
> And actually I think what I did in this case really sucked: the effort
> was centralized on the Coordinator to gather and then redistribute the
> tuples; at least tuples that did not need to move were not moved at
> all.

Yes, there is a lot of complexity involved in sending results between
nodes.

> >> Once that is done, we can see what workloads it covers and
> >> decide if we are willing to copy the volume of code necessary
> >> to implement all supported Postgres XC or XL workloads.
> >> (The Postgres XL license now matches the Postgres license,
> >> http://www.postgres-xl.org/2015/07/license-change-and-9-5-merge/.
> >> Postgres XC has always used the Postgres license.)
> 
> Postgres-XC used the GPL license at first, and moved to the PostgreSQL
> license precisely to allow Postgres core to reuse the code later if needed.

Ah, yes, I remember that now.  Thanks.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PATCH: index-only scans with partial indexes

2016-02-25 Thread Tomas Vondra

Hi,

On 02/25/2016 11:56 AM, Kyotaro HORIGUCHI wrote:
> Hello, Tomas. My cerebral cortex gets squeezed by jumping from
> parser to planner.

LOL

> At Wed, 24 Feb 2016 01:13:22 +0100, Tomas Vondra  wrote 
> in <56ccf5a2.5040...@2ndquadrant.com>
>> Hi,
>>
>> On 12/06/2015 11:48 PM, Tomas Vondra wrote:
>>> /*
>>>  * Frequently, there will be no partial indexes, so first check to
>>>  * make sure there's something useful to do here.
>>>  */
>>> have_partial = false;
>>> foreach(lc, rel->indexlist)
>>> {
>>>   IndexOptInfo *index = (IndexOptInfo *) lfirst(lc);
>>>
>>>   /*
>>>    * index rinfos are the same to baseristrict infos for non-partial
>>>    * indexes
>>>    */
>>>   index->indrinfos = rel->baserestrictinfo;
>>>
>>>   if (index->indpred == NIL)
>>>     continue;  /* ignore non-partial indexes */
>>>
>>>   if (index->predOK)
>>>     continue;  /* don't repeat work if already proven OK */
>>>
>>>   have_partial = true;
>>>   break;
>>> }
>>
>> Attached is a v6 of the patch, which is actually the version
>> submitted by Kyotaro-san on 2015/10/8 rebased to current master and
>> with two additional changes.
>
> This relies on the fact, I believe, that no indexrelinfos are
> shared among any two baserels. Are you OK with that?

I'm not sure what this refers to. You mean the fact that an index will 
have one IndexOptInfo instance for each baserel? Yes, that seems fine to me.

>> Firstly, I've removed the "break" from the initial foreach loop in
>> check_partial_indexes(). As explained in the previous message, I
>> believe this was a bug in the patch.
>
> I agree. The break is surely a bug. (the regression tests didn't
> catch it though..)

>> Secondly, I've tried to improve the comments to explain a bit better
>> what the code is doing.
>
> In check_partial_indexes, there is the following comment.
>
>>    /*
>>     * Update restrictinfo for this index by excluding clauses that
>>     * are implied by the index predicates. We first quickly check if
>>     * there are any implied clauses, and if we found at least one
>>     * we do the actual work. This way we don't need the construct
>>     * a copy of the list unless actually needed.
>>     */
>
> Is "need the construct" a mistake of "need to construct" ?
>
> match_restriction_clauses_to_index is called only from
> create_index_paths, and now it no longer uses its first
> parameter rel. We can safely remove the useless parameter.
>
> -  match_restriction_clauses_to_index(RelOptInfo *rel, IndexOptInfo *index,
> - IndexClauseSet *clauseset)
> +  match_restriction_clauses_to_index(IndexOptInfo *index,
> + IndexClauseSet *clauseset)
>
> I find no other problem and have no objection to this. May I mark
> this "Ready for committer" after fixing them?

OK. Do you want me to do the fixes, or will you do that?

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PATCH: index-only scans with partial indexes

2016-02-25 Thread Kyotaro HORIGUCHI
Hello, Tomas. My cerebral cortex gets squeezed by jumping from
parser to planner.

At Wed, 24 Feb 2016 01:13:22 +0100, Tomas Vondra  
wrote in <56ccf5a2.5040...@2ndquadrant.com>
> Hi,
> 
> On 12/06/2015 11:48 PM, Tomas Vondra wrote:
> >/*
> > * Frequently, there will be no partial indexes, so first check to
> > * make sure there's something useful to do here.
> > */
> >have_partial = false;
> >foreach(lc, rel->indexlist)
> >{
> >  IndexOptInfo *index = (IndexOptInfo *) lfirst(lc);
> >
> >  /*
> >   * index rinfos are the same to baseristrict infos for non-partial
> >   * indexes
> >   */
> >  index->indrinfos = rel->baserestrictinfo;
> >
> >  if (index->indpred == NIL)
> >    continue;  /* ignore non-partial indexes */
> >
> >  if (index->predOK)
> >    continue;  /* don't repeat work if already proven OK */
> >
> >  have_partial = true;
> >  break;
> >}
> 
> Attached is a v6 of the patch, which is actually the version submitted
> by Kyotaro-san on 2015/10/8 rebased to current master and with two
> additional changes.

This relies on the fact, I believe, that no indexrelinfos are
shared among any two baserels. Are you OK with that?

> Firstly, I've removed the "break" from the initial foreach loop in
> check_partial_indexes(). As explained in the previous message, I
> believe this was a bug in the patch.

I agree. The break is surely a bug. (the regression tests didn't
catch it though..)

> Secondly, I've tried to improve the comments to explain a bit better
> what the code is doing.

In check_partial_indexes, there is the following comment.

>   /*
>* Update restrictinfo for this index by excluding clauses that
>* are implied by the index predicates. We first quickly check if
>* there are any implied clauses, and if we found at least one
>* we do the actual work. This way we don't need the construct
>* a copy of the list unless actually needed.
>*/

Is "need the construct" a mistake of "need to construct" ?
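
(For context, the clause exclusion described in that comment is what
enables index-only scans on partial indexes; a sketch with made-up
names, not from the patch itself:

    CREATE TABLE orders (id bigint, paid boolean, total numeric);
    CREATE INDEX orders_paid_total_idx ON orders (total) WHERE paid;

    -- "WHERE paid" is implied by the index predicate, so once the
    -- clause is excluded the query touches only the indexed column,
    -- allowing an index-only scan:
    EXPLAIN SELECT total FROM orders WHERE paid;
)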


match_restriction_clauses_to_index is called only from
create_index_paths, and now it no longer uses its first
parameter rel. We can safely remove the useless parameter.

-  match_restriction_clauses_to_index(RelOptInfo *rel, IndexOptInfo *index,
- IndexClauseSet *clauseset)
+  match_restriction_clauses_to_index(IndexOptInfo *index,
+ IndexClauseSet *clauseset)

I find no other problem and have no objection to this. May I mark
this "Ready for committer" after fixing them?

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Support for N synchronous standby servers - take 2

2016-02-25 Thread Kyotaro HORIGUCHI
Hello,

At Wed, 24 Feb 2016 18:01:59 +0900, Masahiko Sawada  
wrote in