Re: [HACKERS] Getting server crash after running sqlsmith

2017-05-22 Thread tushar

On 03/29/2017 12:06 AM, Tom Lane wrote:

Hm ... I don't see a crash here, but I wonder whether you have parameters
set that would cause this query to be run as a parallel query?  Because
pg_rotate_logfile() is marked as parallel-safe in pg_proc, which seems
probably insane.
Well, I am able to see a crash.  Enable "logging_collector=on" in the 
postgresql.conf file, restart the server, and fire the SQL query below 5 
or 6 times:


select
  80 as c0,
  pg_catalog.pg_backend_pid() as c1,
  68 as c2,
  subq_1.c0 as c3,
  subq_1.c0 as c4
from
  (select
ref_0.specific_schema as c0
  from
information_schema.role_routine_grants as ref_0,
lateral (select
  ref_0.grantor as c0,
  50 as c1
from
  information_schema.routines as ref_1
where (63 = 86)
  or (pg_catalog.pg_advisory_lock(
  cast(ref_1.result_cast_datetime_precision as integer),
  cast(pg_catalog.bttidcmp(
cast(null as tid),
cast(null as tid)) as integer)) is NULL)
limit 143) as subq_0
  where pg_catalog.pg_rotate_logfile() is NULL) as subq_1
where 50 <> 45;
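
As an aside, the pg_proc marking Tom mentions can be inspected directly; a
quick check (proparallel is 's' for parallel safe, 'r' for restricted, 'u'
for unsafe):

select proname, proparallel from pg_catalog.pg_proc where proname = 'pg_rotate_logfile';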

--
regards, tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] Create subscription with `create_slot=false` and incorrect slot name

2017-05-22 Thread Euler Taveira
2017-05-22 17:52 GMT-03:00 Dmitry Dolgov <9erthali...@gmail.com>:

> Maybe this question was already raised before, but I couldn't find a
> discussion. When I'm creating a subscription with `create_slot=false`, it
> looks like it's possible to pass a slot name with invalid characters. In
> this particular case both publisher and subscriber were on the same machine:
>
> =# CREATE SUBSCRIPTION test_sub CONNECTION 'dbname=db host=host
> port=port user=user' PUBLICATION test_pub WITH (create_slot=false,
> slot_name='test slot');
> NOTICE:  0: synchronized table states
> LOCATION:  CreateSubscription, subscriptioncmds.c:443
> CREATE SUBSCRIPTION
> Time: 5.887 ms
>
The command succeeds even if slot_name is invalid. That's because slot_name
isn't validated. ReplicationSlotValidateName() should be called in
parse_subscription_options() to avoid a pilot error. IMHO we should prevent
a future error (use of an invalid slot name).
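
With such a check in place, the failure would presumably surface at CREATE
SUBSCRIPTION time rather than at DROP time, e.g. (illustrative only; the
error and hint below are the ones the publisher already raises for an
invalid slot name):

=# CREATE SUBSCRIPTION test_sub CONNECTION 'dbname=db host=host port=port user=user'
   PUBLICATION test_pub WITH (create_slot=false, slot_name='test slot');
ERROR:  replication slot name "test slot" contains invalid character
HINT:  Replication slot names may only contain lower case letters, numbers,
and the underscore character.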


> Of course `test slot` doesn't exist, because I can't create a slot with
> incorrect name. And consequently I can't drop this subscription:
>
> =# DROP SUBSCRIPTION test_sub;
> ERROR:  XX000: could not drop the replication slot "test slot" on
> publisher
> DETAIL:  The error was: ERROR:  replication slot name "test slot"
> contains invalid character
> HINT:  Replication slot names may only contain lower case letters,
> numbers, and the underscore character.
> LOCATION:  DropSubscription, subscriptioncmds.c:947
> Time: 2.615 ms
>
Indeed you can drop the subscription. There are two details: (i) the
subscription should be disabled and (ii) the slot name can't be set.

bar=# drop subscription sub1;
ERROR:  could not drop the replication slot "does_not_exist" on publisher
DETAIL:  The error was: ERROR:  replication slot "does_not_exist" does not
exist
bar=# alter subscription sub1 set (slot_name = NONE);
ERROR:  cannot set slot_name = NONE for enabled subscription
bar=# alter subscription sub1 disable;
ALTER SUBSCRIPTION
bar=# drop subscription sub1;
ERROR:  could not drop the replication slot "does_not_exist" on publisher
DETAIL:  The error was: ERROR:  replication slot "does_not_exist" does not
exist
bar=# alter subscription sub1 set (slot_name = NONE);
ALTER SUBSCRIPTION
bar=# drop subscription sub1;
DROP SUBSCRIPTION

Should we add a hint to the 'could not drop the replication slot' message?
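
If so, the hint could simply point at the workaround shown above, e.g.
(wording purely illustrative):

ERROR:  could not drop the replication slot "does_not_exist" on publisher
HINT:  Disable the subscription and run ALTER SUBSCRIPTION ... SET
(slot_name = NONE) if the slot should not be dropped.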


-- 
   Euler Taveira                    Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento



Re: [HACKERS] psql - add special variable to reflect the last query status

2017-05-22 Thread Fabien COELHO


Hello Pavel,


I don't have any other comments. The implementation is trivial. [...]


Indeed.


I'll mark this patch as ready for committers.


Thanks for the review.

--
Fabien.




Re: [HACKERS] Error-like LOG when connecting with SSL for password authentication

2017-05-22 Thread Vaishnavi Prabakaran
On Mon, May 22, 2017 at 5:10 PM, Michael Paquier wrote:

> If the protocol version is SSL
> 3.0 or TLS 1.0, this result code is returned only if a closure alert
> has occurred in the protocol, i.e. if the connection has been closed
> cleanly. Note that in this case SSL_ERROR_ZERO_RETURN does not
> necessarily indicate that the underlying transport has been closed.
>

I guess this error code exists even for the SSL2 protocol. In that case,
don't we need to keep the current code for this error code?

Regards,
Vaishnavi,
Fujitsu Australia.


Re: [HACKERS] Some thoughts about SCRAM implementation

2017-05-22 Thread Michael Paquier
On Mon, Apr 10, 2017 at 6:39 PM, Álvaro Hernández Tortosa wrote:
> - I think channel binding support should be added. SCRAM brings security
> improvements over md5 and other simpler digest algorithms. But where it
> really shines is together with channel binding. This is the only method to
> prevent MITM attacks. Implementing it should not be very difficult. There are
> several possible channel binding mechanisms, but the mandatory and probably
> more typical one is 'tls-unique' which basically means getting the byte
> array from the TLSfinish() message and comparing it with the same data sent
> by the client. That's more or less all it takes to implement it. So I would
> go for it.

I was just looking at that during a long flight, and OpenSSL offers
SSL_get_finished() and SSL_get_peer_finished() to get the Finished
message data. So that's a matter of saving it in the client, encoding it
in base64 (the spec is clear about that), and sending the data as part of
the first challenge response to the server, which just compares both
contents. Of course the protocol list and the first client message
need to be extended as well, but most of the facility is already
there. The spec is also clear about the lines to follow should the
client and/or server be built without OpenSSL (aka no channel binding
support).
-- 
Michael




Re: [HACKERS] Improve logical decoding error message (was wal_level > WAL_LEVEL_LOGICAL)

2017-05-22 Thread Alvaro Herrera
Neha Khatri wrote:
> On Tue, May 23, 2017 at 10:26 AM, Michael Paquier wrote:
> > There is no wal_level higher than logical, so the current sense looks
> > perfectly fine to me.
> 
> If there is no wal_level higher than logical, should the following error
> message indicate to set it >= logical?
> 
>  select * from
> pg_create_logical_replication_slot('regression_slot','test_decoding');
>  ERROR:  logical decoding requires wal_level >= logical

I think it's purposefully ambiguous to cover a possible future
extension.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Improve logical decoding error message (was wal_level > WAL_LEVEL_LOGICAL)

2017-05-22 Thread Neha Khatri
On Tue, May 23, 2017 at 10:26 AM, Michael Paquier  wrote:

> On Tue, May 23, 2017 at 8:08 AM, Neha Khatri wrote:
> > The Logical Decoding example in the documentation says:
> >
> >   "Before you can use logical decoding, you must set wal_level to logical
> > and max_replication_slots to at least 1."
> >
> > But above error message is not exactly consistent with this
> documentation.
> > Would it make sense to keep the error message and the documentation
> > consistent like the attached.
>
> There is no wal_level higher than logical, so the current sense looks
> perfectly fine to me.
>

If there is no wal_level higher than logical, should the following error
 message indicate to set it >= logical?

 select * from
pg_create_logical_replication_slot('regression_slot','test_decoding');
 ERROR:  logical decoding requires wal_level >= logical

Regards,
Neha


[HACKERS] pg_upgrade translation

2017-05-22 Thread Alvaro Herrera
Translatability was turned on for pg_upgrade on pg10, but it needs some
work.  The choices are we fix it now, or we revert the addition of NLS
there and we do it for PG11.

I think the ugliest part is changing the messages about "the %s server",
of which there are about a dozen -- things like:
msgid "pg_ctl failed to start the %s server, or connection failed\n"
where the %s is either "new" or "old".

These need to be split in two messages, and I think I'd replace the
"old" with "source" and "new" with "target".  Something like this:

*** a/src/bin/pg_upgrade/server.c
--- b/src/bin/pg_upgrade/server.c
*************** start_postmaster(ClusterInfo *cluster, bool throw_error)
*** 285,293 ****
  				   PQerrorMessage(conn));
  		if (conn)
  			PQfinish(conn);
! 		pg_fatal("could not connect to %s postmaster started with the command:\n"
! 				 "%s\n",
! 				 CLUSTER_NAME(cluster), cmd);
  	}
  	PQfinish(conn);
  
--- 285,297 ----
  				   PQerrorMessage(conn));
  		if (conn)
  			PQfinish(conn);
! 		if (cluster == &old_cluster)
! 			pg_fatal("could not connect to source postmaster started with the command:\n"
! 					 "%s\n", cmd);
! 		else
! 			pg_fatal("could not connect to target postmaster started with the command:\n"
! 					 "%s\n", cmd);
! 
  	}
  	PQfinish(conn);



Aye or nay?

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Improve logical decoding error message (was wal_level > WAL_LEVEL_LOGICAL)

2017-05-22 Thread Michael Paquier
On Tue, May 23, 2017 at 8:08 AM, Neha Khatri  wrote:
> The Logical Decoding example in the documentation says:
>
>   "Before you can use logical decoding, you must set wal_level to logical
> and max_replication_slots to at least 1."
>
> But above error message is not exactly consistent with this documentation.
> Would it make sense to keep the error message and the documentation
> consistent like the attached.

There is no wal_level higher than logical, so the current sense looks
perfectly fine to me.
-- 
Michael




Re: [HACKERS] translatable string fixes

2017-05-22 Thread Alvaro Herrera
It took me a very long time to figure out how to translate these 9.6-new
strings in the AM validate routines:

msgid "gin operator family \"%s\" contains support procedure %s with cross-type 
registration"

The problem I had was that the term "cross-type registration" is not
used anywhere else, it's not clear what it means, and (worst from my
POV) I couldn't think of any decent phrasing in Spanish for it.  After
staring at the code for a while, I translated them roughly as:

"gin operator family %s contains support procedure %s registered with differing 
types"

which I think is a tad clearer ... but as a user confronted with such a
message, I would be at a complete loss on what to do about it.

Maybe we can use this phrasing:
"gin operator family %s contains support procedure %s registered with different 
left and right types"


The other complaint I have about this one and also other amvalidate
functions is the hardcoded AM name, so it's actually one string per AM,
which is annoying (a total of twenty-something messages which are
actually only four or five different ones).  Ignoring the second part of
the phrase now, we could use this:
  "operator family %s of access method %s contains support procedure %s with 
cross-type registration"

Thoughts?

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] PG10 Crash-safe and replicable Hash Indexes and UNIQUE

2017-05-22 Thread Chapman Flack
On 05/22/17 18:39, Alvaro Herrera wrote:
> Chapman Flack wrote:
>> CREATE INDEX ON foo USING btree ( bar, baz ALSO quux );
>
> INCLUDING:
> https://www.postgresql.org/message-id/56168952.4010...@postgrespro.ru

I'd buy that.

-Chap




[HACKERS] Improve logical decoding error message (was wal_level > WAL_LEVEL_LOGICAL)

2017-05-22 Thread Neha Khatri
On Mon, May 22, 2017 at 11:08 PM, Neha Khatri  wrote:

> As per my understanding, the current postgres server supports only three
> values for wal_level, i.e. 'minimal', 'replica' or 'logical'. But the
> following error message brought to my notice that there are various code
> spots that try to look for wal_level >= WAL_LEVEL_LOGICAL:
>
>   select * from pg_create_logical_replication_slot('regression_slot',
> 'test_decoding');
>   ERROR:  logical decoding requires wal_level >= logical
>

The Logical Decoding example in the documentation says:

  "Before you can use logical decoding, you must set wal_level

 to logical and max_replication_slots

to
at least 1."

But the above error message is not exactly consistent with this documentation.
Would it make sense to keep the error message and the documentation
consistent, like the attached?

Regards,
Neha


improve_err_message.patch
Description: Binary data



Re: [HACKERS] PG10 Crash-safe and replicable Hash Indexes and UNIQUE

2017-05-22 Thread Alvaro Herrera
Chapman Flack wrote:

> That was what gave me the idea in the first place, which then
> I realized could be more generally useful. If I could say
> something like
> 
> CREATE INDEX ON foo USING btree ( bar, baz ALSO quux );
> 
> so that only bar and baz are compared in insertion and search,
> but quux is along for the ride and available to index-only scans,

INCLUDING:
https://www.postgresql.org/message-id/56168952.4010...@postgrespro.ru

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] ALTER PUBLICATION materializing the list of tables

2017-05-22 Thread Jeff Janes
If I create a publication FOR ALL TABLES and then change my mind, the only
thing I can do is drop the publication and recreate it.

Since "ALTER PUBLICATION name SET TABLE" allows you to replace the entire
table list, shouldn't it also let you change from the dynamic FOR ALL
TABLES to a static specific list?
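
For illustration, the kind of sequence one might expect to work
(hypothetical; per the above, today the only option is to drop and recreate
the publication, and the table names here are made up):

CREATE PUBLICATION mypub FOR ALL TABLES;
-- later, change my mind and pin the publication to an explicit list:
ALTER PUBLICATION mypub SET TABLE t1, t2;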

Cheers,

Jeff


[HACKERS] ALTER PUBLICATION documentation

2017-05-22 Thread Jeff Janes
"The first variant of this command listed in the synopsis can change all of
the publication properties specified in CREATE PUBLICATION
."

That referenced first variant no longer exists.  I don't if that should
just be removed, or reworked to instead describe "ALTER PUBLICATION name
SET".

Cheers,

Jeff


[HACKERS] Create subscription with `create_slot=false` and incorrect slot name

2017-05-22 Thread Dmitry Dolgov
Hi

Maybe this question was already raised before, but I couldn't find a
discussion. When I'm creating a subscription with `create_slot=false`, it
looks like it's possible to pass a slot name with invalid characters. In
this particular case both publisher and subscriber were on the same machine:

=# CREATE SUBSCRIPTION test_sub CONNECTION 'dbname=db host=host
port=port user=user' PUBLICATION test_pub WITH (create_slot=false,
slot_name='test slot');
NOTICE:  0: synchronized table states
LOCATION:  CreateSubscription, subscriptioncmds.c:443
CREATE SUBSCRIPTION
Time: 5.887 ms

Of course `test slot` doesn't exist, because I can't create a slot with
incorrect name. And consequently I can't drop this subscription:

=# DROP SUBSCRIPTION test_sub;
ERROR:  XX000: could not drop the replication slot "test slot" on
publisher
DETAIL:  The error was: ERROR:  replication slot name "test slot"
contains invalid character
HINT:  Replication slot names may only contain lower case letters,
numbers, and the underscore character.
LOCATION:  DropSubscription, subscriptioncmds.c:947
Time: 2.615 ms

=# CREATE SUBSCRIPTION test_sub CONNECTION 'dbname=db host=host
port=port user=user' PUBLICATION test_pub WITH (create_slot=false,
slot_name='test slot');
ERROR:  42710: subscription "test_sub" already exists
LOCATION:  CreateSubscription, subscriptioncmds.c:344
Time: 0.492 ms

Is this an issue, or am I doing something wrong?


Re: [HACKERS] psql - add special variable to reflect the last query status

2017-05-22 Thread Pavel Stehule
2017-05-22 21:33 GMT+02:00 Fabien COELHO :

>
> Please find attached a v2 which hopefully takes into account all your
>>> points above.
>>>
>>> Open question: should it gather more PQresultErrorField values, or are
>>> the two selected ones enough? If more, which should be included?
>>>
>>
>>
>> I don't think it is necessary, not at this moment. ERROR_CODE and
>> ERROR_MESSAGE are fundamental - and if we add others, then we should add
>> them all. It makes no sense to add only some.
>>
>
> Ok. I'm fine with stopping at CODE & MESSAGE.


I don't have any other comments. The implementation is trivial. I reran all
the tests and they passed.

I'll mark this patch as ready for committers.

Regards

Pavel

>
>
> --
> Fabien.
>


Re: [HACKERS] psql - add special variable to reflect the last query status

2017-05-22 Thread Fabien COELHO



Please find attached a v2 which hopefully takes into account all your
points above.

Open question: should it gather more PQresultErrorField values, or are the
two selected ones enough? If more, which should be included?



I don't think it is necessary, not at this moment. ERROR_CODE and
ERROR_MESSAGE are fundamental - and if we add others, then we should add
them all. It makes no sense to add only some.


Ok. I'm fine with stopping at CODE & MESSAGE.

--
Fabien.




Re: [HACKERS] PATCH: recursive json_populate_record()

2017-05-22 Thread Nikita Glukhov

Attached are two small fixes for the previously committed patch:

1. I've noticed a difference in behavior between json_populate_record()
and jsonb_populate_record() if we are trying to populate a record from a
non-JSON-object: the json function throws an error but the jsonb function
returns a record with NULL fields. So I think it would be better to throw
an error in the jsonb case too, but I'm not sure (see the example after
this list).

2. I've also found a thinko in the JsObjectSize() macro, but it seems that
this obvious mistake cannot lead to incorrect behavior.

--
Nikita Glukhov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c
index ab9a745..a98742f 100644
--- a/src/backend/utils/adt/jsonfuncs.c
+++ b/src/backend/utils/adt/jsonfuncs.c
@@ -308,12 +308,11 @@ typedef struct JsObject
 	((jsv)->is_json ? (jsv)->val.json.type == JSON_TOKEN_STRING \
 		: ((jsv)->val.jsonb && (jsv)->val.jsonb->type == jbvString))
 
-#define JsObjectSize(jso) \
+#define JsObjectIsEmpty(jso) \
 	((jso)->is_json \
-		? hash_get_num_entries((jso)->val.json_hash) \
-		: !(jso)->val.jsonb_cont || JsonContainerSize((jso)->val.jsonb_cont))
-
-#define JsObjectIsEmpty(jso) (JsObjectSize(jso) == 0)
+		? hash_get_num_entries((jso)->val.json_hash) == 0 \
+		: (!(jso)->val.jsonb_cont || \
+		   JsonContainerSize((jso)->val.jsonb_cont) == 0))
 
 #define JsObjectFree(jso) do { \
 		if ((jso)->is_json) \
diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c
index a98742f..083982c 100644
--- a/src/backend/utils/adt/jsonfuncs.c
+++ b/src/backend/utils/adt/jsonfuncs.c
@@ -2683,7 +2683,20 @@ JsValueToJsObject(JsValue *jsv, JsObject *jso)
 			JsonContainerIsObject(jbv->val.binary.data))
 			jso->val.jsonb_cont = jbv->val.binary.data;
 		else
+		{
+			bool		is_scalar = IsAJsonbScalar(jbv) ||
+(jbv->type == jbvBinary &&
+ JsonContainerIsScalar(jbv->val.binary.data));
+
+			ereport(ERROR,
+	(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+	 errmsg(is_scalar
+			? "cannot call %s on a scalar"
+			: "cannot call %s on an array",
+			"populate_composite")));
+
 			jso->val.jsonb_cont = NULL;
+		}
 	}
 }
 
diff --git a/src/test/regress/expected/jsonb.out b/src/test/regress/expected/jsonb.out
index f199eb4..7954703 100644
--- a/src/test/regress/expected/jsonb.out
+++ b/src/test/regress/expected/jsonb.out
@@ -2276,17 +2276,9 @@ SELECT jsa FROM jsonb_populate_record(NULL::jsbrec, '{"jsa": ["aaa", null, [1, 2
 (1 row)
 
 SELECT rec FROM jsonb_populate_record(NULL::jsbrec, '{"rec": 123}') q;
- rec  
---
- (,,)
-(1 row)
-
+ERROR:  cannot call populate_composite on a scalar
 SELECT rec FROM jsonb_populate_record(NULL::jsbrec, '{"rec": [1, 2]}') q;
- rec  
---
- (,,)
-(1 row)
-
+ERROR:  cannot call populate_composite on an array
 SELECT rec FROM jsonb_populate_record(NULL::jsbrec, '{"rec": {"a": "abc", "c": "01.02.2003", "x": 43.2}}') q;
 rec
 ---
@@ -2303,11 +2295,7 @@ SELECT reca FROM jsonb_populate_record(NULL::jsbrec, '{"reca": 123}') q;
 ERROR:  expected json array
 HINT:  see the value of key "reca"
 SELECT reca FROM jsonb_populate_record(NULL::jsbrec, '{"reca": [1, 2]}') q;
-  reca   
--
- {"(,,)","(,,)"}
-(1 row)
-
+ERROR:  cannot call populate_composite on a scalar
 SELECT reca FROM jsonb_populate_record(NULL::jsbrec, '{"reca": [{"a": "abc", "b": 456}, null, {"c": "01.02.2003", "x": 43.2}]}') q;
   reca  
 



Re: [HACKERS] psql - add special variable to reflect the last query status

2017-05-22 Thread Pavel Stehule
2017-05-22 9:40 GMT+02:00 Fabien COELHO :

>
> Hello Pavel,
>
> After some discussions about what could be useful since psql scripts now
 accepts tests, this patch sets a few variables which can be used by psql
 after a "front door" (i.e. actually typed by the user) query:

  - RESULT_STATUS: the status of the query
  - ERROR: whether the query failed
  - ERROR_MESSAGE: ...
  - ROW_COUNT: #rows affected

  SELECT * FROM ;
  \if :ERROR
\echo oops
\q
  \endif

 I'm not sure that the names are right. Maybe STATUS would be better than
 RESULT_STATUS.

>>>
>>> I am sending review of this patch:
>>
>> 1. I agree that STATUS is a better name than RESULT_STATUS.
>>
>
> Ok, looks simpler.
>
> Currently it returns values with prefix PGRES (like PGRES_FATAL_ERROR,
>> PGRES_TUPLES_OK). Maybe we should cut this prefix. FATAL_ERROR,
>> TUPLES_OK look better for a custom level. The PGRES prefix makes no sense
>> in psql.
>>
>
> Indeed. I skipped "PGRES_".
>
> 2. I propose making ERROR_CODE readable as well - sometimes it can be more
>> practical than parsing a possibly translated error message
>>
>
> Ok.
>
> 3. The fields ERROR_MESSAGE and ROW_COUNT are set only when it makes sense.
>> This behavior is maybe too strict for psql, and the processing needs more
>> nested \if commands. What do you think about -1 or 0 for ROW_COUNT (for
>> DDL) and "" for ERROR_MESSAGE when there is no error?  It would be
>> consistent with the already implemented LASTOID variable (and other psql
>> state variables). Using default values is not strictly clean, but it can
>> reduce the complexity of psql scripts.
>>
>
> My intention was that it could be tested with the "is defined" syntax,
> which is yet to be agreed upon and implemented, so maybe generating an
> empty string is a better option.
>
> For ROW_COUNT, I think that it should be consistent with what PL/pgSQL
> does, so it think that 0 should be the default.
>
> 4. all regress tests passed
>> 5. there are not any problem with doc building
>>
>
> Please find attached a v2 which hopefully takes into account all your
> points above.
>
> Open question: should it gather more PQresultErrorField values, or are the
> two selected ones enough? If more, which should be included?


I don't think it is necessary, not at this moment. ERROR_CODE and
ERROR_MESSAGE are fundamental - and if we add others, then we should add
them all. It makes no sense to add only some.

Regards

Pavel


>
> --
> Fabien.


Re: [HACKERS] plpgsql, caching, resowners and jit

2017-05-22 Thread Tom Lane
Andres Freund  writes:
> After doing so, I got pretty weird crashes.  A bit of debugging later it
> became apparent that the issue is in how plpgsql caches expression
> states: ...
> which means we'll re-use ExprStates built in another subtransaction.

Sure, why not?  They generally aren't going to change across
subtransactions.

I do not recall that there's a separate resowner for those things
though.  Usually the idea for ExprState-related resources is to
clean them up in an ExprContextCallback, cf executor/functions.c.
Might work better if you attack it that way instead of via a resowner.

Or maybe we could set up a separate resowner for plpgsql's simple
expression states.  Not sure what consequences that would have,
but it doesn't seem unreasonable on its face.

regards, tom lane




[HACKERS] Is it possible to get query_string value in an event trigger?

2017-05-22 Thread Jeremy Finzel
Hello.  Is it possible to get the currently executing query in an event
trigger, for example, a create table event trigger function firing after
ddl_command_end, aside from checking pg_stat_activity for the current
process?

When I am debugging the source code, after executing a statement I can see
the variable query_string, which has the entire SQL statement. However, I
would be very interested to know whether it's possible to get this
query_string value in a C function called during an event trigger, for a
given process, or if by that time the memory has been freed and/or it's just
not possible for some other reason to get the query string of a particular
process.

Any thoughts much appreciated.


Re: [HACKERS] PostgreSQL 10beta1 / OpenBSD : compilation failed with libxml

2017-05-22 Thread Pierre-Emmanuel André
On Mon, May 22, 2017 at 10:36:41AM -0400, Tom Lane wrote:
> Pierre-Emmanuel André writes:
> > I still have an issue with OpenBSD -current and PostgreSQL 10beta1.
> 
> > common.o: In function `psql_get_variable':
> > common.c:(.text+0x114c): undefined reference to `appendShellStringNoError'
> > mainloop.o: In function `MainLoop':
> > mainloop.c:(.text+0xcd): undefined reference to `psql_scan_set_passthrough'
> > startup.o: In function `main':
> > startup.c:(.text+0x1b01): undefined reference to `psql_scan_set_passthrough'
> > collect2: ld returned 1 exit status
> 
> It looks like you are somehow linking to a pre-v10 version of
> libpgfeutils.a.  Maybe that is installed in /usr/local/lib or someplace?
> 
> > When i remove --libxml everything runs fine.
> 
> That's odd.  I think --with-libxml changes the set of -L directives,
> which could be a mechanism to explain that, but it's sure not clear
> why libxml would be bringing libpgfeutils along with it.
> 
> Note to hackers: seems like we'd better arrange for
> -L../../../src/fe_utils to appear earlier in psql's link command
> than it now does.  I think we thought we could get away with being
> sloppy because we weren't planning on libpgfeutils getting installed
> anywhere ... but that idea seems to have gone by the wayside.
> 


Hi Tom,

You're right. My build environment was fucked up (2 versions of PostgreSQL).
I cleaned up and now everything runs fine:

# select version() ;
                                           version
-----------------------------------------------------------------------------------------------
 PostgreSQL 10beta1 on x86_64-unknown-openbsd6.1, compiled by cc (GCC) 4.2.1 20070719, 64-bit



(But it seems that libxml changes the order of -L directives)

Sorry for the noise.

Regards,






[HACKERS] plpgsql, caching, resowners and jit

2017-05-22 Thread Andres Freund
Hi,

While hacking a bit more on my JIT prototype (so I actually know what
I'm talking about at pgcon), I encountered an interesting issue.  To
keep track of the lifetime of JITed functions it seems natural to add
support for that to resowners.  Not that hard.

After doing so, I got pretty weird crashes.  A bit of debugging later it
became apparent that the issue is in how plpgsql caches expression
states:
typedef struct PLpgSQL_expr
{
...
 * if expr is simple AND prepared in current transaction,
 * expr_simple_state and expr_simple_in_use are valid. Test validity by
 * seeing if expr_simple_lxid matches current LXID.  (If not,
 * expr_simple_state probably points at garbage!)
 */
	ExprState  *expr_simple_state;	/* eval tree for expr_simple_expr */
	bool		expr_simple_in_use; /* true if eval tree is active */
	LocalTransactionId expr_simple_lxid;
...

which we test in functions like exec_eval_simple_expr like:
	/*
	 * Prepare the expression for execution, if it's not been done already in
	 * the current transaction.  (This will be forced to happen if we called
	 * exec_simple_recheck_plan above.)
	 */
	if (expr->expr_simple_lxid != curlxid)

which means we'll re-use ExprStates built in another subtransaction.  In
the case causing trouble, the ExprState has been built in another
subtransaction, that subxact has been aborted, and thus the function
released.  And to make it even more confusing, in the case I noticed
first the memory had since been reused by a *different* function :/.

Is it really intentional that plpgsql uses ExprStates built in aborted
subtransactions?  It seems to mostly work because the cache
invalidations triggered by the subabort trigger replans in many cases,
but I'm very doubtful that's complete.


It's easy enough to "solve" in the case of JIT by only releasing
functions at eoxact, by reassigning them upwards, but that doesn't
strike me as a nice approach.

Comments? Ideas?


- Andres




Re: [HACKERS] [BUGS] Concurrent ALTER SEQUENCE RESTART Regression

2017-05-22 Thread Andres Freund
On 2017-05-19 08:31:15 -0400, Robert Haas wrote:
> On Thu, May 18, 2017 at 4:54 PM, Andres Freund  wrote:
> > There's still weird behaviour, unfortunately.  If you do an ALTER
> > SEQUENCE changing minval/maxval w/ restart in a transaction, and abort,
> > you'll a) quite possibly not be able to use the sequence anymore,
> > because it may of bounds b) DDL still isn't transactional.
>
> Your emails would be a bit easier to understand if you included a few
> more words.

Yea - I'd explained this one already somewhere upthread, and I'd hoped
it'd be enough, but I probably was wrong.


> I'm guessing "may of bounds" is supposed to say "may be out of bounds"?

Yes.


Consider a scenario like:

S1: CREATE SEQUENCE oobounds MINVALUE 1 START 1;
S1: SELECT nextval('oobounds'); -> 1
S2: BEGIN;
S2: ALTER SEQUENCE oobounds MAXVALUE -10 START -10 MINVALUE -1000 INCREMENT BY 
-1 RESTART;
S2: SELECT nextval('oobounds'); -> -10
S2: ROLLBACK;
S1: SELECT * FROM pg_sequence WHERE seqrelid = 'oobounds'::regclass;
┌──────────┬──────────┬──────────┬──────────────┬─────────────────────┬────────┬──────────┬──────────┐
│ seqrelid │ seqtypid │ seqstart │ seqincrement │       seqmax        │ seqmin │ seqcache │ seqcycle │
├──────────┼──────────┼──────────┼──────────────┼─────────────────────┼────────┼──────────┼──────────┤
│   203401 │       20 │        1 │            1 │ 9223372036854775807 │      1 │        1 │ f        │
└──────────┴──────────┴──────────┴──────────────┴─────────────────────┴────────┴──────────┴──────────┘
S1: SELECT nextval('oobounds'); -> -9

Ooops.

Two issues: Firstly, we get a value smaller than seqmin, obviously
that's not ok. But even if we'd error out, it'd imo still not be ok,
because we have a command that behaves partially transactionally
(keeping the seqmax/min transactionally), partially not (keeping the
current sequence state at -9).

- Andres




Re: pgindent (was Re: [HACKERS] [COMMITTERS] pgsql: Preventive maintenance in advance of pgindent run.)

2017-05-22 Thread Tom Lane
Piotr Stefaniak  writes:
> On 2017-05-22 01:50, Tom Lane wrote:
>> Being lazy, I just wiped my copy and re-cloned, but it still seems the
>> same as before ... last commit on the pass3 branch is from Mar 4.
>> What branch should I be paying attention to?

> pass3 is the right branch. A fresh clone should have worked as in the
> attached log.

Ah, I now see that the code did change, I'd just been confused by
the "git log" history.

Small thought: shouldn't your updated code in pr_comment be changed to

-   for (t_ptr = e_com + len - 1; t_ptr > e_com; t_ptr--)
+   for (t_ptr = e_com + len - 1; t_ptr >= e_com; t_ptr--)

Perhaps it's impossible for the character right at e_com to be a space,
but there seems no need for this loop to assume that.

regards, tom lane




Re: [HACKERS] PG10 Crash-safe and replicable Hash Indexes and UNIQUE

2017-05-22 Thread Chapman Flack
On 05/22/2017 05:16 AM, Amit Kapila wrote:
> Agreed, but even if we have any such syntax, making it work for hash
> indexes is tricky, because we currently store the hash code in the
> index, not the original hash index key.

That was what gave me the idea in the first place, which then
I realized could be more generally useful. If I could say
something like

CREATE INDEX ON foo USING btree ( bar, baz ALSO quux );

so that only bar and baz are compared in insertion and search,
but quux is along for the ride and available to index-only scans,
then the (admittedly weird) syntax

CREATE INDEX ON foo USING hash ( bar ALSO bar );

could be taken to mean that the value of bar as well as its hash
is wanted in the index. I was first thinking of that to save
unique-insert from running to the heap to check hash collisions,
though on second thought if collisions are common enough for that
to be a win, maybe the hash function's just wrong. It could still
be useful for index-only scans.

-Chap





Re: [HACKERS] PostgreSQL 10beta1 / OpenBSD : compilation failed with libxml

2017-05-22 Thread Tom Lane
Pierre-Emmanuel André writes:
> I still have an issue with OpenBSD -current and PostgreSQL 10beta1.

> common.o: In function `psql_get_variable':
> common.c:(.text+0x114c): undefined reference to `appendShellStringNoError'
> mainloop.o: In function `MainLoop':
> mainloop.c:(.text+0xcd): undefined reference to `psql_scan_set_passthrough'
> startup.o: In function `main':
> startup.c:(.text+0x1b01): undefined reference to `psql_scan_set_passthrough'
> collect2: ld returned 1 exit status

It looks like you are somehow linking to a pre-v10 version of
libpgfeutils.a.  Maybe that is installed in /usr/local/lib or someplace?

> When i remove --libxml everything runs fine.

That's odd.  I think --with-libxml changes the set of -L directives,
which could be a mechanism to explain that, but it's sure not clear
why libxml would be bringing libpgfeutils along with it.

Note to hackers: seems like we'd better arrange for
-L../../../src/fe_utils to appear earlier in psql's link command
than it now does.  I think we thought we could get away with being
sloppy because we weren't planning on libpgfeutils getting installed
anywhere ... but that idea seems to have gone by the wayside.

regards, tom lane




[HACKERS] wal_level > WAL_LEVEL_LOGICAL

2017-05-22 Thread Neha Khatri
As per my understanding, the current postgres server supports only three
values for wal_level, i.e. 'minimal', 'replica' or 'logical'. But the
following error message brought to my notice that there are various code
spots that try to look for wal_level >= WAL_LEVEL_LOGICAL:

  select * from pg_create_logical_replication_slot('regression_slot',
'test_decoding');
  ERROR:  logical decoding requires wal_level >= logical

The code locations that look for/expect wal_level > WAL_LEVEL_LOGICAL are:

 heapam.c                          7690   * This is only used in wal_level >= WAL_LEVEL_LOGICAL, and only for catalog
 logical.c                           83   errmsg("logical decoding requires wal_level >= logical")));
 standby.c  LogStandbySnapshot      950   if (wal_level >= WAL_LEVEL_LOGICAL)
 xlog.h     XLogLogicalInfoActive   162   #define XLogLogicalInfoActive() (wal_level >= WAL_LEVEL_LOGICAL)

Since postgres does not allow wal_level > WAL_LEVEL_LOGICAL, the above
code locations should be modified like:

 s/>=/=

Thoughts/Suggestions?

Regards,
Neha




Re: [HACKERS] pg_dump ignoring information_schema tables which used in Create Publication.

2017-05-22 Thread Kuntal Ghosh
On Mon, May 22, 2017 at 5:22 PM, tushar  wrote:
> On 05/22/2017 05:12 PM, Kuntal Ghosh wrote:
>>
>> pg_dump ignores anything created under object name "pg_*" or
>> "information_schema".
>
> In the below scenario, I am able to see pg_dump catch the information
> of a table which is created under information_schema
>
> --
> -- Name: e1; Type: VIEW; Schema: public; Owner: centos
> --
>
> CREATE VIEW e1 AS
>  SELECT abc.n
>FROM information_schema.abc;
> 
>
The view is created in the public schema. Hence, you're able to take a
dump of it. However, you'll probably get an error saying
"relation information_schema.abc doesn't exist" while restoring the
dump.

For publications, the create definition (CREATE PUBLICATION) and the
addition of tables (ALTER PUBLICATION ADD TABLE) are separated. Hence,
it can skip all the relations created under information_schema or
anything under a "pg_*" schema.

-- 
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] pg_dump ignoring information_schema tables which used in Create Publication.

2017-05-22 Thread tushar

On 05/22/2017 05:31 PM, Tom Lane wrote:

Do we have a prohibition against publishing/subscribing anything
in pg_catalog?

Yes.

postgres=# create publication pub for table pg_catalog.pg_AM;
ERROR:  "pg_am" is a system table
DETAIL:  System tables cannot be added to publications.
postgres=#

--
regards, tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] pg_dump ignoring information_schema tables which used in Create Publication.

2017-05-22 Thread Tom Lane
tushar  writes:
> On 05/22/2017 05:12 PM, Kuntal Ghosh wrote:
>> pg_dump ignores anything created under object name "pg_*" or
>> "information_schema".

> In the below scenario, I am able to see pg_dump catch the
> information of a table which is created under information_schema

Creating your own tables inside information_schema is not considered
a supported operation anyway.  In general, there should be only
system views in there, so there's no point in publishing them.

Do we have a prohibition against publishing/subscribing anything
in pg_catalog?  (Please tell me the answer is yes, because if it's
no, I'll bet good money that attempting to do so reveals all sorts
of bugs.)  I would suggest that information_schema, and probably
pg_toast, should have the same restriction.

regards, tom lane




Re: [HACKERS] Increasing parallel workers at runtime

2017-05-22 Thread Kuntal Ghosh
On Mon, May 22, 2017 at 2:54 PM, Rafia Sabih wrote:
> On Wed, May 17, 2017 at 2:57 PM, Amit Kapila  wrote:
>> On Tue, May 16, 2017 at 2:14 PM, Ashutosh Bapat
>>  wrote:
>>> On Mon, May 15, 2017 at 9:23 PM, Robert Haas  wrote:
>>>
>>> Also, looking at the patch, it doesn't look like it takes enough care
>>> to build the execution state of a new worker so that it can participate in a
>>> running query. I may be wrong, but the execution state initialization
>>> routines are written with the assumption that all the workers start
>>> simultaneously?
>>>
>>
>> No such assumptions, workers started later can also join the execution
>> of the query.
>>
> If we are talking of run-time allocation of workers I'd like to
> propose an idea to safeguard parallelism from selectivity-estimation
> errors. Start each query (if it qualifies for the use of parallelism)
> with a minimum number of workers (say 2) irrespective of the #planned
> workers. Then as query proceeds and we find that there is more work to
> do, we allocate more workers.
>
> Let's get to the details a little, we'll have following new variables,
> - T_int - a time interval at which we'll periodically check if the
> query requires more workers,
> - work_remaining - a variable which estimates the work yet to do. This
> will use the selectivity estimates to find the total work done and the
> remaining work accordingly. Once, the actual number of rows crosses
> the estimated number of rows, take maximum possible tuples for that
> operator as the new estimate.
>
> Now, we'll check at gather, after each T_int if the work is remaining
> and allocate another 2 (say) workers. This way we'll keep on adding
> the workers in small chunks and not in one go. Thus, saving resources
> in case over-estimation is done.
>
I understand your concern about selectivity estimation errors, which
affect the number of workers planned as well. But, in that case, I
would like to fix the optimizer so that it calculates the number of
workers correctly. If the optimizer thinks that we should start with n
workers, probably we SHOULD start with n workers.

However, an error in selectivity estimation (the root of all evil, the
Achilles heel of query optimization, according to Guy Lohman et al.
:)) can always prove the optimizer wrong. In that case, +1 for your
suggested approach of dynamically adding or killing some workers based
on the estimated work left to do.

-- 
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] pg_dump ignoring information_schema tables which used in Create Publication.

2017-05-22 Thread tushar

On 05/22/2017 05:12 PM, Kuntal Ghosh wrote:

pg_dump ignores anything created under object name "pg_*" or
"information_schema".
In the below scenario, I am able to see pg_dump catch the
information of a table which is created under information_schema:


postgres=# create database  ntest;
CREATE DATABASE
postgres=# \c ntest
You are now connected to database "ntest" as user "centos".
ntest=# create table information_schema.abc(n int);
CREATE TABLE
ntest=# create   view e1  as select * from information_schema.abc;
CREATE VIEW

[centos@centos-cpula regress]$ pg_dump -Fp  ntest > /tmp/a.a

cat /tmp/a.a

SET search_path = public, pg_catalog;

--
-- Name: e1; Type: VIEW; Schema: public; Owner: centos
--

CREATE VIEW e1 AS
 SELECT abc.n
   FROM information_schema.abc;


--
regards, tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] pg_dump ignoring information_schema tables which used in Create Publication.

2017-05-22 Thread Kuntal Ghosh
Hello,

pg_dump ignores anything created under a schema named "pg_*" or
"information_schema". I guess you will not have any "CREATE TABLE"
definition for information_schema.abc either. Related code:

else if (strncmp(nsinfo->dobj.name, "pg_", 3) == 0 ||
 strcmp(nsinfo->dobj.name, "information_schema") == 0)
{
/* Other system schemas don't get dumped */
nsinfo->dobj.dump_contains = nsinfo->dobj.dump = DUMP_COMPONENT_NONE;
}

Hence, there is no point in creating the publication for it in the dump.

On Mon, May 22, 2017 at 4:22 PM, tushar  wrote:
> Hi,
>
> pg_dump is ignoring tables which are created under the information_schema
> schema for CREATE PUBLICATION.
>
> postgres=# create database  test;
> CREATE DATABASE
> postgres=# \c test
> You are now connected to database "test" as user "centos".
> test=# create table information_schema.abc(n int);
> CREATE TABLE
> test=# create publication test for table information_schema.abc;
> CREATE PUBLICATION
> test=# select * from pg_publication_tables;
>  pubname |     schemaname     | tablename
> ---------+--------------------+-----------
>  test    | information_schema | abc
> (1 row)
>
> test=# \q
> [centos@centos-cpula regress]$ pg_dump -Fp  test > /tmp/a.a
> [centos@centos-cpula regress]$ cat /tmp/a.a|grep publication -i
> -- Name: test; Type: PUBLICATION; Schema: -; Owner: centos
> CREATE PUBLICATION test WITH (publish = 'insert, update, delete');
> ALTER PUBLICATION test OWNER TO centos;
> [centos@centos-cpula regress]$
>
> --
> regards,tushar
> EnterpriseDB  https://www.enterprisedb.com/
> The Enterprise PostgreSQL Company
>
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers



-- 
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com




[HACKERS] Index created in BEFORE trigger not updated during INSERT

2017-05-22 Thread Albe Laurenz
Not that it is a useful use case, but I believe that this is
a bug that causes index corruption:

CREATE TABLE mytable(
   id integer PRIMARY KEY,
   id2 integer NOT NULL
);

CREATE FUNCTION makeindex() RETURNS trigger
   LANGUAGE plpgsql AS
$$BEGIN
   CREATE INDEX ON mytable(id2);
   RETURN NEW;
END;$$;

CREATE TRIGGER makeindex BEFORE INSERT ON mytable FOR EACH ROW
   EXECUTE PROCEDURE makeindex();

INSERT INTO mytable VALUES (1, 42);

SELECT * FROM mytable;
┌────┬─────┐
│ id │ id2 │
├────┼─────┤
│  1 │  42 │
└────┴─────┘
(1 row)

But 42 is not present in the index:

SET enable_seqscan = off;

SELECT * FROM mytable WHERE id2 = 42;
┌────┬─────┐
│ id │ id2 │
├────┼─────┤
└────┴─────┘
(0 rows)



Re: [HACKERS] "create publication..all tables" ignore 'partition not supported' error

2017-05-22 Thread Kuntal Ghosh
Yeah, it's a bug. While showing the table definition, we use the
following query for showing the related publications:
"SELECT pub.pubname\n"
  " FROM pg_catalog.pg_publication pub\n"
  " LEFT JOIN pg_catalog.pg_publication_rel pr\n"
  "  ON (pr.prpubid = pub.oid)\n"
  "WHERE pr.prrelid = '%s' OR pub.puballtables\n"
  "ORDER BY 1;"

When pub.puballtables is TRUE, we should also check whether the
relation is publishable or not (something like the is_publishable_class
function in pg_publication.c).

However, I'm not sure whether this is the correct way to solve the problem.
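
For concreteness, a rough sketch of what the \d+ query could check instead
(only the relkind part of is_publishable_class is reflected here; the
catalog and persistence checks would need the same treatment):

SELECT pub.pubname
FROM pg_catalog.pg_publication pub
     LEFT JOIN pg_catalog.pg_publication_rel pr ON (pr.prpubid = pub.oid)
WHERE pr.prrelid = '%s'
   OR (pub.puballtables
       AND EXISTS (SELECT 1 FROM pg_catalog.pg_class c
                   WHERE c.oid = '%s' AND c.relkind = 'r'))
ORDER BY 1;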

On Mon, May 22, 2017 at 2:39 PM, tushar  wrote:
> Hi,
>
> I observed that - "create publication..all tables" ignore 'partition not
> supported' error
>
> \\create a partition table
>
> You are now connected to database "s" as user "centos".
> s=# CREATE TABLE measurement (
> s(# city_id int not null,
> s(# logdate date not null,
> s(# peaktemp int,
> s(# unitsales int
> s(# ) PARTITION BY RANGE (logdate);
> CREATE TABLE
> s=#
>
> \\try to publish only this table
>
> s=# create publication p for table  measurement;
> ERROR:  "measurement" is a partitioned table
> DETAIL:  Adding partitioned tables to publications is not supported.
> HINT:  You can add the table partitions individually.
>
> \\try to create publication for all tables
> s=# create publication p for all tables ;
> CREATE PUBLICATION
> s=# \d+ measurement
>  Table "public.measurement"
>   Column   |  Type   | Collation | Nullable | Default | Storage | Stats
> target | Description
> ---+-+---+--+-+-+--+-
>  city_id   | integer |   | not null | | plain |
> |
>  logdate   | date|   | not null | | plain |
> |
>  peaktemp  | integer |   |  | | plain |
> |
>  unitsales | integer |   |  | | plain |
> |
> Partition key: RANGE (logdate)
> Publications:
> "p"
>
> Publication 'p' has been set against a partitioned table, which is not
> supported.
>
> --
> regards,tushar
> EnterpriseDB  https://www.enterprisedb.com/
> The Enterprise PostgreSQL Company
>
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers



-- 
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com




[HACKERS] pg_dump ignoring information_schema tables which used in Create Publication.

2017-05-22 Thread tushar

Hi,

pg_dump is ignoring tables which are created under the information_schema
schema for CREATE PUBLICATION.


postgres=# create database  test;
CREATE DATABASE
postgres=# \c test
You are now connected to database "test" as user "centos".
test=# create table information_schema.abc(n int);
CREATE TABLE
test=# create publication test for table information_schema.abc;
CREATE PUBLICATION
test=# select * from pg_publication_tables;
 pubname |     schemaname     | tablename
---------+--------------------+-----------
 test    | information_schema | abc
(1 row)

test=# \q
[centos@centos-cpula regress]$ pg_dump -Fp  test > /tmp/a.a
[centos@centos-cpula regress]$ cat /tmp/a.a|grep publication -i
-- Name: test; Type: PUBLICATION; Schema: -; Owner: centos
CREATE PUBLICATION test WITH (publish = 'insert, update, delete');
ALTER PUBLICATION test OWNER TO centos;
[centos@centos-cpula regress]$

--
regards, tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[HACKERS] PostgreSQL 10beta1 / OpenBSD : compilation failed with libxml

2017-05-22 Thread Pierre-Emmanuel André
Hi,

I still have an issue with OpenBSD -current and PostgreSQL 10beta1.
I tried to build it with these options: --with-openssl=/usr --with-perl 
--with-pam=no --with-uuid=bsd --enable-integer-datetimes 
--with-system-tzdata=/usr/share/zoneinfo --with-openssl --disable-thread-safety 
--with-libxml

And I have this error:

gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement 
-Wendif-labels -Wmissing-format-attribute -Wformat-security 
-fno-strict-aliasing -fwrapv -O2 command.o common.o conditional.o copy.o 
crosstabview.o describe.o help.o input.o large_obj.o mainloop.o prompt.o 
psqlscanslash.o sql_help.o startup.o stringutils.o tab-complete.o variables.o  
-L../../../src/common -lpgcommon -L../../../src/port -lpgport 
-L../../../src/interfaces/libpq -lpq -L../../../src/port -L../../../src/common 
-L/usr/local/lib -L/usr/lib -L/usr/local/lib -L/usr/local/lib  -Wl,-Bdynamic 
-Wl,-R'/usr/local/pgsql/lib' -L../../../src/fe_utils -lpgfeutils -lpq  
-lpgcommon -lpgport -lxml2 -lssl -lcrypto -lz -lreadline -ltermcap -lm  -o psql
command.o: In function `exec_command_set':
command.c:(.text+0x4e9a): warning: warning: strcat() is almost always misused, 
please use strlcat()
describe.o: In function `listTSParsers':
describe.c:(.text+0x4e06): warning: warning: sprintf() is often misused, please 
use snprintf()
large_obj.o: In function `do_lo_import':
large_obj.c:(.text+0x637): warning: warning: strcpy() is almost always misused, 
please use strlcpy()
common.o: In function `psql_get_variable':
common.c:(.text+0x114c): undefined reference to `appendShellStringNoError'
mainloop.o: In function `MainLoop':
mainloop.c:(.text+0xcd): undefined reference to `psql_scan_set_passthrough'
startup.o: In function `main':
startup.c:(.text+0x1b01): undefined reference to `psql_scan_set_passthrough'
collect2: ld returned 1 exit status
gmake[3]: *** [Makefile:34: psql] Error 1


When I remove --libxml everything runs fine.
Any ideas?


Regards,




Re: [HACKERS] Increasing parallel workers at runtime

2017-05-22 Thread Rafia Sabih
On Wed, May 17, 2017 at 2:57 PM, Amit Kapila  wrote:
> On Tue, May 16, 2017 at 2:14 PM, Ashutosh Bapat
>  wrote:
>> On Mon, May 15, 2017 at 9:23 PM, Robert Haas  wrote:
>>
>> Also, looking at the patch, it doesn't look like it takes enough care
>> to build the execution state of a new worker so that it can participate in a
>> running query. I may be wrong, but the execution state initialization
>> routines are written with the assumption that all the workers start
>> simultaneously?
>>
>
> No such assumptions, workers started later can also join the execution
> of the query.
>
If we are talking of run-time allocation of workers I'd like to
propose an idea to safeguard parallelism from selectivity-estimation
errors. Start each query (if it qualifies for the use of parallelism)
with a minimum number of workers (say 2) irrespective of the #planned
workers. Then as query proceeds and we find that there is more work to
do, we allocate more workers.

Let's get to the details a little, we'll have following new variables,
- T_int - a time interval at which we'll periodically check if the
query requires more workers,
- work_remaining - a variable which estimates the work yet to do. This
will use the selectivity estimates to find the total work done and the
remaining work accordingly. Once, the actual number of rows crosses
the estimated number of rows, take maximum possible tuples for that
operator as the new estimate.

Now, we'll check at gather, after each T_int if the work is remaining
and allocate another 2 (say) workers. This way we'll keep on adding
the workers in small chunks and not in one go. Thus, saving resources
in case over-estimation is done.

Some of the things we may improve upon are:
- Check if we want to increase workers or kill some of them. E.g. if
the filtering at a node is not happening as estimated, i.e. #output
tuples is the same as #input tuples, then do not add any more workers,
as that would only increase the work at the gather node.
- Instead of just having a fixed 2 or 4 workers at the start,
allocate x% of the planned workers.
- As the query progresses, we may alter the value of T_int and/or the
#workers to allocate. E.g. while a query is less than 50% done, check at
every T_int; after that, increase T_int to T_int * (1 + 0.5), and
similarly for #workers, because by then allocating more workers might
not help much and the cost of adding new workers could dominate.

This scheme is likely to safeguard parallelism with selectivity
estimation errors in a sense that it is using resources only when
required.
Certainly there are many more things to think about in terms of
implementation and the cases where this can help or regress, will be
glad to know opinion of more people on this.

-- 
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/




Re: [HACKERS] PG10 Crash-safe and replicable Hash Indexes and UNIQUE

2017-05-22 Thread Amit Kapila
On Mon, May 22, 2017 at 8:31 AM, Chapman Flack  wrote:
> On 05/19/17 11:41, Tom Lane wrote:
>
>> No, nobody's done anything about allowing hash indexes to support
>> uniqueness AFAIK.  I don't have a clear picture of how much work
>> it would be, but it would likely be more than trivial effort;
>
> I see what you mean. Because of the way hash values are ordered
> (to allow binary search) within a page, but not between pages of
> a bucket, insertion as it stands now is able to stop as soon as
> it finds any page with room for the entry, but a unique-insertion
> will have to check every page of the bucket for matching hashes,
> and then (because only the hash and tid are in the index) chase
> any of those to the heap to compare the value.
>
> Maybe both hash collisions and overflow pages are rare enough
> in practice with reasonable data that the performance impact
> of that would be small, but still the possibility has to be
> accounted for, the locking may get hairier (do you now keep
> the lock you have on the page where room was found for the entry,
> and use another lock to walk the remaining pages until sure
> there's no duplicate?).
>
> At least I see that interest in UNIQUE for hash indexes has been
> shown on -hackers several times over the years, and is on the TODO.
> Neil Conway seems to have had an idea [1] for making the locking work,
> 14 years ago (however relevant that might be to today's code).
>

I think that won't be directly applicable now, as we have changed the
locking mechanism: instead of acquiring heavy-weight locks it uses
LWLocks, and for inserts we don't hold a lock on more than one bucket
page at a time.  I think that to make the unique check work, there are a
couple of options, like (a) unique insertions can always acquire a
cleanup lock on the bucket page; this will work because scans (to find
duplicate values in overflow pages) will hold a pin on the bucket page.
The drawback will be that unique insertions need to wait even when some
unrelated scan is in progress.  (b) unique insertions can hold locks on
bucket pages while traversing the bucket chain.  It is bad for
concurrency to retain locks on multiple bucket pages, but in practice,
unique indexes won't have many overflow pages.  (c) keep a flag in the
bucket page to indicate that a unique-index insertion is in progress,
and if such a flag is set, other insertions need to wait.  I think this
is not a preferable way because we need to take care of clearing such a
flag not only after the operation but also after crash recovery.
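
To illustrate option (a), a rough sketch (read_bucket_page,
bucket_chain_has_duplicate and insert_hash_item are made-up helpers;
LockBufferForCleanup() and the ereport() machinery are the existing
primitives):

/* Sketch of option (a): serialize unique insertions against scans by
 * taking a cleanup lock on the primary bucket page. */
bucket_buf = read_bucket_page(rel, bucket);		/* hypothetical */
LockBufferForCleanup(bucket_buf);			/* waits until no scan holds a pin */

/* Walk the whole bucket chain; for matching hash values, visit the heap
 * to compare the actual key values. */
if (bucket_chain_has_duplicate(rel, bucket_buf, hashkey, heapRel))
	ereport(ERROR,
			(errcode(ERRCODE_UNIQUE_VIOLATION),
			 errmsg("duplicate key value violates unique constraint \"%s\"",
					RelationGetRelationName(rel))));

insert_hash_item(rel, bucket_buf, itup);		/* hypothetical */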

Any better ideas?

> ... and one inquiry last year [2] did seem to get tabled because of the
> lack of WAL logging, which is now a non-blocker.
>
> I haven't seen much discussion of /why/ one would want hash-based UNIQUE.
> I know my own reasons, but I'm not sure how persuasive they are in light
> of the implementation realities, so maybe that makes such a discussion
> worthwhile. I can start; these are the two reasons I had:
>
> 1. To a naive intuition (especially one raised more on in-memory data
>structures than the guts of databases), it just seems natural:
>hashing seems like the canonical approach to uniqueness testing
>where there's no need for ordering, intuition suggests a performance
>advantage, and so the least-astonishment principle suffers upon finding
>it isn't supported.
>
> 2. When developing a custom data type, it feels like tedious
>busy-work to have to bang out a full set of ordering operators
>for a btree operator class if there is no meaningful order for
>the type.
>
> Maybe the intuitions behind (1) are just misinformed, the performance
> ones at least, in light of Craig Ringer's low opinion of whether "hash
> indexes are better than btree for anything" [3], and André Barbosa's
> more recent performance comparison [4] (which does show some advantages
> for hash in some circumstances, but mostly not large. The only large
> advantage was in initial creation; would that be hashsort.c at work?).
>
> But then, both [3] and [4] predate the recent projects on hash indexes
> that have "made them crash safe and are on the way to making them
> performant" [5], so maybe an updated comparison would be timely, or some
> addition to the docs to better characterize the circumstances where hash
> could be good.
>

I think the performance characteristics of hash indexes have changed,
especially for read-only cases, after the recent work.  We have done many
tests which indicate that hash indexes perform better than btree when
the column value is unique.  You might want to try that based on your
use case.

> (Every index method newer than btree and hash has its own
> part VII Internals chapter; for completeness, might it make sense to have
> those for btree and hash also, even if only to broadly discuss
> the conditions under which they perform especially well or poorly?)
>
> For all sorts of indexes, would there be any use for some CREATE INDEX
> syntax 

[HACKERS] "create publication..all tables" ignore 'partition not supported' error

2017-05-22 Thread tushar

Hi,

I observed that "create publication ... all tables" ignores the
'partitioned table not supported' error.


\\create a partition table

You are now connected to database "s" as user "centos".
s=# CREATE TABLE measurement (
s(# city_id int not null,
s(# logdate date not null,
s(# peaktemp    int,
s(# unitsales   int
s(# ) PARTITION BY RANGE (logdate);
CREATE TABLE
s=#

\\try to publish only this table

s=# create publication p for table  measurement;
ERROR:  "measurement" is a partitioned table
DETAIL:  Adding partitioned tables to publications is not supported.
HINT:  You can add the table partitions individually.

\\try to create publication for all tables
s=# create publication p for all tables ;
CREATE PUBLICATION
s=# \d+ measurement
                              Table "public.measurement"
  Column   |  Type   | Collation | Nullable | Default | Storage | Stats target | Description
-----------+---------+-----------+----------+---------+---------+--------------+-------------
 city_id   | integer |           | not null |         | plain   |              |
 logdate   | date    |           | not null |         | plain   |              |
 peaktemp  | integer |           |          |         | plain   |              |
 unitsales | integer |           |          |         | plain   |              |

Partition key: RANGE (logdate)
Publications:
"p"

Publication 'p' has been set against a partitioned table, which is not
supported.


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [HACKERS] [POC] hash partitioning

2017-05-22 Thread amul sul
On Fri, May 19, 2017 at 10:35 PM, Robert Haas  wrote:
> On Fri, May 19, 2017 at 5:32 AM, amul sul  wrote:
>> Updated patch attached.  0001-patch rebased against latest head.
>> 0002-patch also incorporates code comments and error message changes
>> as per Robert's & your suggestions. Thanks !
>
> -if (spec->strategy != PARTITION_STRATEGY_LIST)
> -elog(ERROR, "invalid strategy in partition bound spec");
> +Assert(spec->strategy == PARTITION_STRATEGY_LIST);
>
> Let's just drop these hunks.  I realize this is a response to a review
> comment I made, but I take it back.  If the existing code is already
> doing it this way, there's no real need to revise it.  The patch
> doesn't even make it consistent anyway, since elsewhere you elog() for
> a similar case.  Perhaps elog() is best anyway.
>
Done.

> -partitioning methods include range and list, where each partition is
> -assigned a range of keys and a list of keys, respectively.
> +partitioning methods include hash, range and list, where each partition 
> is
> +assigned a modulus and remainder of keys, a range of keys and a list of
> +keys, respectively.
>
> I think this sentence has become too long and unwieldy, and is more
> unclear than helpful.  I'd just write "The currently supported
> partitioning methods are list, range, and hash."  The use of the word
> include is actually wrong here, because it implies that there are more
> not mentioned here, which is false.
>
Done.

> -  expression.  If no btree operator class is specified when creating a
> -  partitioned table, the default btree operator class for the datatype 
> will
> -  be used.  If there is none, an error will be reported.
> +  expression.  List and range partitioning uses only btree operator 
> class.
> +  Hash partitioning uses only hash operator class. If no operator class 
> is
> +  specified when creating a partitioned table, the default operator class
> +  for the datatype will be used.  If there is none, an error will be
> +  reported.
> + 
>
> I suggest: If no operator class is specified when creating a
> partitioned table, the default operator class of the appropriate type
> (btree for list and range partitioning, hash for hash partitioning)
> will be used.  If there is none, an error will be reported.
>
Done.

> + 
> +  Since hash partitiong operator class, provide only equality,
> not ordering,
> +  collation is not relevant in hash partition key column. An error will 
> be
> +  reported if collation is specified.
>
> partitiong -> partitioning.  Also, remove the comma after "operator
> class" and change "not relevant in hash partition key column" to "not
> relevant for hash partitioning".  Also change "if collation is
> specified" to "if a collation is specified".
>
Done.

> +   Create a hash partitioned table:
> +
> +CREATE TABLE orders (
> +order_id bigint not null,
> +cust_id  bigint not null,
> +status   text
> +) PARTITION BY HASH (order_id);
> +
>
> Move this down so it's just above the example of creating partitions.
>
Done.

> + * For range and list partitioned tables, datums is an array of datum-tuples
> + * with key->partnatts datums each.
> + * For hash partitioned tables, it is an array of datum-tuples with 2 datums,
> + * modulus and remainder, corresponding to a given partition.
>
> Second line is very short; reflow as one paragraph.
>
Done

>   * In case of range partitioning, it stores one entry per distinct range
>   * datum, which is the index of the partition for which a given datum
>   * is an upper bound.
> + * In the case of hash partitioning, the number of the entries in the indexes
> + * array is same as the greatest modulus amongst all partitions. For a given
> + * partition key datum-tuple, the index of the partition which would
> accept that
> + * datum-tuple would be given by the entry pointed by remainder produced when
> + * hash value of the datum-tuple is divided by the greatest modulus.
>
> Insert line break before the new text as a paragraph break.

Will wait for more inputs on Ashutosh's explanation upthread.
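
For reference, the lookup described in that comment boils down to roughly
the following (a simplified sketch, not the patch's actual code;
greatest_modulus stands for the largest modulus among the partitions):

	uint32	hash = compute_hash_value(key, values, isnull);
	int		part = boundinfo->indexes[hash % greatest_modulus];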

>
> +charstrategy;/* hash, list or range bounds? */
>
> Might be clearer to just write /* hash, list, or range? */ or /*
> bounds for hash, list, or range? */
>

Done, used "hash, list, or range?"

>
> +static uint32 compute_hash_value(PartitionKey key, Datum *values,
> bool *isnull);
> +
>
> I think there should be a blank line after this but not before it.
>

Done.

> I don't really see why hash partitioning needs to touch
> partition_bounds_equal() at all.  Why can't the existing logic work
> for hash partitioning without change?
>

Unlike list and range partitioning, ndatums does not represent the size of
the indexes array; also, the dimension of the datums array is not the same
as key->partnatts.

> +valid_bound 

Re: [HACKERS] psql - add special variable to reflect the last query status

2017-05-22 Thread Fabien COELHO


Hello Pavel,


After some discussions about what could be useful since psql scripts now
accept tests, this patch sets a few variables which can be used by psql
after a "front door" (i.e. actually typed by the user) query:

 - RESULT_STATUS: the status of the query
 - ERROR: whether the query failed
 - ERROR_MESSAGE: ...
 - ROW_COUNT: #rows affected

 SELECT * FROM ;
 \if :ERROR
   \echo oops
   \q
 \endif

I'm not sure that the names are right. Maybe STATUS would be better than
RESULT_STATUS.



I am sending a review of this patch:

1. I agree that STATUS is a better name than RESULT_STATUS.


Ok, looks simpler.

Currently it returns values with the PGRES prefix (like PGRES_FATAL_ERROR, 
PGRES_TUPLES_OK). Maybe we should cut this prefix: FATAL_ERROR, 
TUPLES_OK look better at the user level. The PGRES prefix makes no sense 
in psql.


Indeed. I skipped "PGRES_".


2. I propose also exposing ERROR_CODE - sometimes it can be more
practical than parsing a possibly translated error message.


Ok.


3. The fields ERROR_MESSAGE and ROW_COUNT are set only when it makes sense.
This behaviour is maybe too strict for psql, and processing then needs more
nested \if commands. What do you think about -1 or 0 for ROW_COUNT (for
DDL) and "" for ERROR_MESSAGE when there is no error?  It would be
consistent with the already implemented LASTOID variable (and other psql
state variables). Using default values is not strictly clean, but it can
reduce the complexity of psql scripts.


My intention was that it could be tested with the "is defined" syntax, 
which is yet to be agreed upon and implemented, so maybe generating an 
empty string is a better option.


For ROW_COUNT, I think that it should be consistent with what PL/pgSQL 
does, so I think that 0 should be the default.
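
As an illustration of how these could be used together in a script
(a hypothetical table tab, assuming the variable names above with the
0/empty-string defaults just discussed):

  UPDATE tab SET x = 1 WHERE id = 42;
  \if :ERROR
    \echo update failed (:ERROR_CODE): :ERROR_MESSAGE
    \q
  \endif
  \echo updated :ROW_COUNT row(s)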



4. All regression tests passed.
5. There are no problems with doc building.


Please find attached a v2 which hopefully takes into account all your 
points above.


Open question: should it expose more PQresultErrorField() values, or are
the two selected ones enough? If more, which should be included?


--
Fabien.

diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 3b86612..d33da32 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -3449,6 +3449,35 @@ bar
   
 
   
+   ERROR
+   
+
+ Whether the last query failed, as a boolean.
+
+   
+  
+
+  
+   ERROR_CODE
+   
+
+ The error code associated with the last query, or
+ 0 if no error occurred.
+
+   
+  
+
+  
+   ERROR_MESSAGE
+   
+
+ If the last query failed, the associated error message,
+ otherwise an empty string.
+
+   
+  
+
+  
 FETCH_COUNT
 
 
@@ -3653,6 +3682,24 @@ bar
   
 
   
+   STATUS
+   
+
+ Text status of the last query.
+
+   
+  
+
+  
+   ROW_COUNT
+   
+
+ How many rows were returned or affected by the last query.
+
+   
+  
+
+  
 QUIET
 
 
diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c
index a2f1259..4a3c6a9 100644
--- a/src/bin/psql/common.c
+++ b/src/bin/psql/common.c
@@ -1213,6 +1213,62 @@ PrintQueryResults(PGresult *results)
 	return success;
 }
 
+/*
+ * Set special variables for "front door" queries
+ * - STATUS: last query status
+ * - ERROR: TRUE/FALSE, whether an error occurred
+ * - ERROR_CODE: code if an error occurred, or "0"
+ * - ERROR_MESSAGE: message if an error occurred, or ""
+ * - ROW_COUNT: how many rows were returned or affected, or "0"
+ */
+static void
+SetResultVariables(PGresult *results)
+{
+	bool			success;
+	ExecStatusType	restat = PQresultStatus(results);
+	char 		   *code = PQresultErrorField(results, PG_DIAG_SQLSTATE);
+	char 		   *mesg = PQresultErrorField(results, PG_DIAG_MESSAGE_PRIMARY);
+
+	SetVariable(pset.vars, "STATUS", PQresStatus(restat) + strlen("PGRES_"));
+	SetVariable(pset.vars, "ERROR_CODE", code ? code : "0");
+	SetVariable(pset.vars, "ERROR_MESSAGE", mesg ? mesg : "");
+
+	/* check all possible PGRES_ */
+	switch (restat)
+	{
+		case PGRES_EMPTY_QUERY:
+		case PGRES_TUPLES_OK:
+		case PGRES_COMMAND_OK:
+		case PGRES_COPY_OUT:
+		case PGRES_COPY_IN:
+		case PGRES_COPY_BOTH:
+		case PGRES_SINGLE_TUPLE:
+			success = true;
+			break;
+		case PGRES_BAD_RESPONSE:
+		case PGRES_NONFATAL_ERROR:
+		case PGRES_FATAL_ERROR:
+			success = false;
+			break;
+		default:
+			/* dead code */
+			success = false;
+			psql_error("unexpected PQresultStatus: %d\n", restat);
+			break;
+	}
+
+	if (success)
+	{
+		char   *ntuples = PQcmdTuples(results);
+		SetVariable(pset.vars, "ROW_COUNT", *ntuples ? ntuples : "0");
+		SetVariable(pset.vars, "ERROR", "FALSE");
+	}
+	else
+	{
+		SetVariable(pset.vars, "ROW_COUNT", "0");
+		SetVariable(pset.vars, "ERROR", "TRUE");
+	}
+}
 
 /*
  * SendQuery: send 

[HACKERS] Error-like LOG when connecting with SSL for password authentication

2017-05-22 Thread Michael Paquier
Hi all,

When attempting to connect using password authentication through SSL,
the backend will complain in its log with the following entry before
calling sendAuthRequest(), which asks the client for a password:
LOG:  could not receive data from client: Connection reset by peer

After a short talk with Heikki, it seems that be_tls_read() complains
on SSL_ERROR_ZERO_RETURN, which is documented here:
https://wiki.openssl.org/index.php/Manual:SSL_get_error(3)
The TLS/SSL connection has been closed. If the protocol version is SSL
3.0 or TLS 1.0, this result code is returned only if a closure alert
has occurred in the protocol, i.e. if the connection has been closed
cleanly. Note that in this case SSL_ERROR_ZERO_RETURN does not
necessarily indicate that the underlying transport has been closed.

As this is a clean shutdown of the SSL connection, shouldn't
be_tls_read() return 0 to the caller instead of -1? This would map
with what the non-SSL code path does for raw reads.
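
A minimal sketch of that idea (this is not the attached patch;
map_ssl_read_result is a made-up helper, while the real change would live
inside be_tls_read()):

#include <sys/types.h>
#include <openssl/ssl.h>

/* Sketch only: report a clean SSL shutdown the way a plain read() reports
 * EOF, i.e. return 0 instead of -1. */
static ssize_t
map_ssl_read_result(SSL *ssl, int rc)
{
	switch (SSL_get_error(ssl, rc))
	{
		case SSL_ERROR_NONE:
			return rc;		/* data was read */
		case SSL_ERROR_ZERO_RETURN:
			return 0;		/* clean shutdown: behave like EOF */
		default:
			return -1;		/* anything else is still treated as an error */
	}
}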

This is basically harmless, but the error message is confusing I
think, and there is no equivalent for the non-SSL code path.

Attached is an idea of a patch.
Thoughts?
-- 
Michael


ssl-read-commerr.patch
Description: Binary data



Re: [HACKERS] Fix a typo in hash.c

2017-05-22 Thread Magnus Hagander
On Mon, May 22, 2017 at 3:36 AM, Masahiko Sawada 
wrote:

> Hi,
>
> Attached patch for $subject.
>
> s/curent/current/
>
>
Applied, thanks!

//Magnus


Re: [HACKERS] Partition-wise join for join between (declaratively) partitioned tables

2017-05-22 Thread Ashutosh Bapat
On Sat, Apr 29, 2017 at 12:37 AM, Robert Haas  wrote:
> On Fri, Apr 28, 2017 at 1:18 AM, Ashutosh Bapat
>  wrote:
>> For two-way join this works and is fairly straight-forward. I am
>> assuming that A an B are base relations and not joins. But making it
>> work for N-way join is the challenge.
>
> I don't think it's much different, is it?  Anyway, I'm going to
> protest if your algorithm for merging bounds takes any more than
> linear time, regardless of what else we decide.
>
>>> Having said that I think we could make this work, I'm starting to
>>> agree with you that it will add more complexity than it's worth.
>>> Needing to keep track of the type of every partition bound
>>> individually seems like a real nuisance, and it's not likely to win
>>> very often because, realistically, people should and generally will
>>> use the same type for the partitioning column in all of the relevant
>>> tables.  So I'm going to revise my position and say it's fine to just
>>> give up on partitionwise join unless the types match exactly, but I
>>> still think we should try to cover the cases where the bounds don't
>>> match exactly but only 1:1 or 1:0 or 0:1 mappings are needed (iow,
>>> optimizations 1 and 2 from your list of 4).  I agree that ganging
>>> partitions (optimization 4 from your list) is not something to tackle
>>> right now.
>>
>> Good. I will have a more enjoyable vacation now.
>
> Phew, what a relief.  :-)
>
>> Do you still want the patition key type to be out of partition scheme?
>> Keeping it there means we match it only once and save it only at a
>> single place. Otherwise, it will have to be stored in RelOptInfo of
>> the partitioned table and match it for every pair of joining
>> relations.
>
> The only reason for removing things from the PartitionScheme was if
> they didn't need to be consistent across all tables.  Deciding that
> the type is one of the things that has to match means deciding it
> should be in the PartitionScheme, not the RelOptInfo.
>

Here's set of patches rebased on latest head.

I spent some time trying to implement partition-wise join when
partition bounds do not match exactly but there's 1:1, 1:0 or 0:1
mapping between partitions. A WIP patch 0017 is included in the set
for the same. The patch is not complete: it doesn't support range
partitions, and some bugs still need to be fixed for list partitions. Also
because of the way it crafts partition bounds for a join, it leaks
memory consumed by partition bounds for every pair of joining
relations. I will work on fixing those issues. That patch is pretty
large now. So, I think we will have to commit it separately on top of
basic partition-wise join implementation. But you will see that it has
minimal changes to the basic partition-wise join code.

I rewrote the code handling partition keys on the nullable side of the
join. Now we store partition keys from nullable and non-nullable
relations separately. The partition keys from nullable relations are
matched only when the equality operator is strict. This is explained
in detail in the comments in match_expr_to_partition_keys() and
build_joinrel_partition_info().

Also please note that since last patch set I have paired the
multi-level partition-wise join support patches with single-level
partition-wise join patches providing corresponding functionality.

[1] 
https://www.postgresql.org/message-id/CAFjFpRd9ebX225KhuvYXQRBuk9NrVJfPzHqGPGqpze%2BqvH0xmw%40mail.gmail.com

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


pg_dp_join_patches_v20.tar.gz
Description: GNU Zip compressed data



Re: [HACKERS] Default Partition for Range

2017-05-22 Thread Ashutosh Bapat
On Mon, May 22, 2017 at 7:27 AM, Beena Emerson  wrote:
> Hello,
>
> Many were in favour of the default partition for tables partitioned by range
> [1].
> Please find attached the WIP patch for the same which can be applied over
> the default_partition_v12.patch.
>
> Syntax: Same as agreed for list:
> CREATE PARTITION  PARTITION OF  DEFAULT;
>
> Default Constraint:
> Negation constraints of all existing partitions.
>
> One case:
> CREATE TABLE range1 (a int, b int) PARTITION by range (a);
> CREATE TABLE range1_1 PARTITION OF range1 FOR VALUES FROM (1) TO (5);
> CREATE TABLE range1_2 PARTITION OF range1 FOR VALUES FROM (7) TO (10);
> CREATE TABLE range1_def PARTITION OF range1 DEFAULT;
> \d+ range1_def
> Table "public.range1_def"
>  Column |  Type   | Collation | Nullable | Default | Storage | Stats target
> | Description
> +-+---+--+-+-+--+-
>  a  | integer |   | not null | | plain   |
> |
>  b  | integer |   |  | | plain   |
> |
> Partition of: range1 DEFAULT
> Partition constraint: (((a < 1) OR (a >= 5)) AND ((a < 7) OR (a >= 10)))

Would it be more readable if this read as NOT
(constraint_for_partition_1 || constraint_for_partition_2 ||
constraint_for_partition_3)? That way, the code to create
constraint_for_partition_n can be reused, and there's a high chance that
we will end up with consistent constraints.
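
For the example above, the two forms are logically equivalent (by De
Morgan), leaving aside any extra null handling the real constraint may
need:

  -- constraint as currently generated (conjunction of negated bounds):
  ((a < 1) OR (a >= 5)) AND ((a < 7) OR (a >= 10))

  -- suggested form, reusing each partition's own constraint:
  NOT ((a >= 1 AND a < 5) OR (a >= 7 AND a < 10))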

>
> It still needs more work:
> 1. Handling addition of new partition after default, insertion of data, few
> more bugs
> 2. Documentation
> 3. Regression tests
>

I think the default partition support for all the strategies that support
it should be a single patch, since most of the code will be shared.
-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company

