Re: [HACKERS] COPY (... tab completion

2016-01-06 Thread Andreas Karlsson

Hi,

I have an updated patch which uses Matches() rather than TailMatches().

Andreas
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index b5117d4..158b913 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1934,12 +1934,19 @@ psql_completion(const char *text, int start, int end)
 /* COPY */
 
 	/*
-	 * If we have COPY [BINARY] (which you'd have to type yourself), offer
-	 * list of tables (Also cover the analogous backslash command)
+	 * If we have COPY, offer list of tables or "(" (Also cover the analogous
+	 * backslash command).
 	 */
-	else if (Matches1("COPY|\\copy") || Matches2("COPY", "BINARY"))
+	else if (Matches1("COPY|\\copy"))
+		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables,
+   " UNION ALL SELECT '('");
+	/* If we have COPY BINARY, complete with list of tables */
+	else if (Matches2("COPY", "BINARY"))
 		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables, NULL);
-	/* If we have COPY [BINARY] , complete it with "TO" or "FROM" */
+	/* If we have COPY (, complete it with legal commands */
+	else if (TailMatches2("COPY|\\copy", "("))
+		COMPLETE_WITH_LIST4("SELECT", "WITH", "UPDATE", "DELETE");
+	/* If we have COPY|BINARY , complete it with "TO" or "FROM" */
 	else if (Matches2("COPY|\\copy", MatchAny) ||
 			 Matches3("COPY", "BINARY", MatchAny))
 		COMPLETE_WITH_LIST2("FROM", "TO");

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal: SET ROLE hook

2016-01-06 Thread Pavel Stehule
Hi

I did a review of this patch.

1. The proposal is clean and there are no objections against it. I
checked the implementation, and it does exactly what was proposed.

2. This hook affects only the SET ROLE TO XXX statement, which is not in
our critical path, so there is no performance impact.

3. I was able to apply the patch cleanly, without any problems or warnings.

4. All regression tests passed.

5. There are no tests and no documentation, but that is usual for a hook.

I have no objections.

I'll mark this patch as ready for committer.

Regards

Pavel


[HACKERS] Regression caused by recent change to initdb?

2016-01-06 Thread Amit Langote
Hi,

I stumbled upon a possibly strange behavior which may be related to recent
initdb changes. For a freshly initdb'd cluster, the following looks fishy:

postgres=# SELECT relname, relnamespace::regnamespace FROM pg_class
WHERE relnamespace != 'pg_catalog'::regnamespace
AND relnamespace != 'pg_toast'::regnamespace
AND relnamespace != 'information_schema'::regnamespace;
       relname        |  relnamespace
----------------------+-----------------
 pg_toast_11817   | pg_toast_temp_1
 pg_toast_11817_index | pg_toast_temp_1
 pg_toast_11822   | pg_toast_temp_1
 pg_toast_11822_index | pg_toast_temp_1
 pg_toast_11827   | pg_toast_temp_1
 pg_toast_11827_index | pg_toast_temp_1
 tmp_pg_description   | pg_temp_1
 tmp_pg_shdescription | pg_temp_1
 tmp_pg_collation | pg_temp_1
(10 rows)

These seem to be leftovers from initdb.c's setup_description() and
setup_collation(). Interestingly, they disappear after performing the
following steps:

0. Stop the server
1. Connect to the database in --single mode, create a temp table, exit.
2. Log back into the database in normal mode and execute the same query.

The behavior seems to date from commit c4a8812cf (Use just one
standalone-backend session for initdb's post-bootstrap steps). Is this a
regression?
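
A quick way to cross-check (a sketch; assumes a superuser psql session) is to run the same kind of query against template1 and see whether the leftover temp-schema entries appear there as well:

```sql
\c template1
SELECT relname, relnamespace::regnamespace
FROM pg_class
WHERE relnamespace::regnamespace::text LIKE 'pg\_%temp\_%';
```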

Thanks,
Amit






Re: [HACKERS] Making tab-complete.c easier to maintain

2016-01-06 Thread Andreas Karlsson

On 01/06/2016 01:13 AM, Michael Paquier wrote:

On Wed, Jan 6, 2016 at 2:03 AM, Tom Lane  wrote:

I've pushed the second patch now.  I made a few adjustments --- notably,
I didn't like the way you'd implemented '*' wildcards, because they
wouldn't have behaved very sanely in combination with '|'.  The case
doesn't come up in the current set of patterns, but we'll likely want it
sometime.


Thanks! Let's consider this project done, then. It was a long way to get here...


Thanks to everyone involved in cleaning this up, it is much easier to 
add tab completions now.


Andreas





[HACKERS] Comment typo in namespace.c

2016-01-06 Thread Amit Langote

Hi,

Attached fixes a typo: s/non-exstant/non-existant.

Alternatively, it could be spelled as 'existent' but the patch doesn't.
Nor does it drop the 's' and spell it 'non-extant' which may have been the
original intent.

Thanks,
Amit
diff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c
index 8b105fe..036eb37 100644
--- a/src/backend/catalog/namespace.c
+++ b/src/backend/catalog/namespace.c
@@ -513,7 +513,7 @@ RangeVarGetCreationNamespace(const RangeVar *newRelation)
  * As a side effect, this function acquires AccessShareLock on the target
  * namespace.  Without this, the namespace could be dropped before our
  * transaction commits, leaving behind relations with relnamespace pointing
- * to a no-longer-exstant namespace.
+ * to a no-longer-existant namespace.
  *
  * As a further side-effect, if the select namespace is a temporary namespace,
  * we mark the RangeVar as RELPERSISTENCE_TEMP.



Re: [HACKERS] Proposal: SET ROLE hook

2016-01-06 Thread Pavel Stehule
Ultimately I would also like to see a general hook available which would
>> fire for all changes to GUC settings, but I think this one is still very
>> useful as it is highly targeted.
>>
>> Comments?
>>
>
> I read the discussion in this thread, and I think creating a hook is the
> simplest and most workable solution.
>
> Personally, I am not sure whether the missing support for SECURITY DEFINER
> functions is OK. When doing a security audit, probably any role change can
> be interesting. This situation could be distinguished by another hook
> parameter.
>

I thought more about this topic, and my request was more or less a
premature optimization. It can be solved simply in the future by a
different hook when one is requested. This should not be a blocker.

Regards

Pavel


>
> Support for event triggers can be another topic; nice to have, but not
> necessary at first.
>
> Regards
>
> Pavel
>
>
>>
>> Thanks,
>>
>> Joe
>>
>> --
>> Crunchy Data - http://crunchydata.com
>> PostgreSQL Support for Secure Enterprises
>> Consulting, Training, & Open Source Development
>>
>>
>


Re: [HACKERS] Optimizer questions

2016-01-06 Thread Konstantin Knizhnik

On 01/06/2016 12:03 PM, David Rowley wrote:

Konstantin, are you thinking of looking into this more, with plans to implement 
code to improve this?



I am not familiar with the PostgreSQL optimizer, but I am now looking inside
its code and trying to find a way to fix this problem.
If somebody who is familiar with the optimizer already knows a solution,
please let me know so that I do not spend time on this.

Actually, I expect that most of the logic is already present in the optimizer:
there are several places where it considers where and how to evaluate the tlist.
For example, the functions use_physical_tlist/disuse_physical_tlist control
whether the relation's whole targetlist is used, or only those Vars actually
needed by the query. I am not sure whether they are related to this particular case...

So most likely it will be enough to insert just one function call in the proper
place, but it is necessary to find the function and the place ;)







Re: [HACKERS] Function and view to retrieve WAL receiver status

2016-01-06 Thread Michael Paquier
On Wed, Jan 6, 2016 at 3:04 PM, Michael Paquier
 wrote:
> Attached is an updated patch.

Forgot to update rules.out...
-- 
Michael
From 4bc33d1497c302b8669b1f1d9d43f2f806029693 Mon Sep 17 00:00:00 2001
From: Michael Paquier 
Date: Fri, 18 Dec 2015 22:44:21 +0900
Subject: [PATCH] Add system view and function to report WAL receiver activity

---
 doc/src/sgml/monitoring.sgml  |  91 
 src/backend/catalog/system_views.sql  |  16 
 src/backend/replication/walreceiver.c | 154 ++
 src/include/catalog/pg_proc.h |   2 +
 src/include/replication/walreceiver.h |   2 +
 src/test/regress/expected/rules.out   |  13 +++
 6 files changed, 278 insertions(+)

diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index c503636..85459d0 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -301,6 +301,14 @@ postgres   27093  0.0  0.0  30096  2752 ?Ss   11:34   0:00 postgres: ser
  
 
  
+  pg_stat_wal_receiverpg_stat_wal_receiver
+  Only one row, showing statistics about the WAL receiver from
+   that receiver's connected server.
+   See  for details.
+  
+ 
+
+ 
   pg_stat_sslpg_stat_ssl
   One row per connection (regular and replication), showing information about
SSL used on this connection.
@@ -833,6 +841,89 @@ postgres   27093  0.0  0.0  30096  2752 ?Ss   11:34   0:00 postgres: ser
listed; no information is available about downstream standby servers.
   
 
+  
+   pg_stat_wal_receiver View
+   
+
+
+  Column
+  Type
+  Description
+ 
+
+
+   
+
+ pid
+ integer
+ Process ID of the WAL receiver process
+
+
+ status
+ text
+ Activity status of the WAL receiver process
+
+
+ receive_start_lsn
+ pg_lsn
+ First transaction log position used when WAL receiver is
+  started
+
+
+ receive_start_tli
+ integer
+ First timeline number used when WAL receiver is started
+
+
+ received_lsn
+ pg_lsn
+ Last transaction log position already received and flushed to
+  disk, the initial value of this field being the first log position used
+  when WAL receiver is started
+
+
+ received_tli
+ integer
+ Timeline number of last transaction log position received and
+  flushed to disk, the initial value of this field being the timeline
+  number of the first log position used when WAL receiver is started
+ 
+
+
+ last_msg_send_time
+ timestamp with time zone
+ Send time of last message received from origin WAL sender
+
+
+ last_msg_receipt_time
+ timestamp with time zone
+ Receipt time of last message received from origin WAL sender
+
+
+ latest_end_lsn
+ pg_lsn
+ Last transaction log position reported to origin WAL sender
+
+
+ latest_end_time
+ timestamp with time zone
+ Time of last transaction log position reported to origin WAL sender
+
+
+ slot_name
+ text
+ Replication slot name used by this WAL receiver
+
+   
+   
+  
+
+  
+   The pg_stat_wal_receiver view will contain only
+   one row, showing statistics about the WAL receiver from that receiver's
+   connected server.
+  
+
   
pg_stat_ssl View

diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 2052afd..506a884 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -662,6 +662,22 @@ CREATE VIEW pg_stat_replication AS
 WHERE S.usesysid = U.oid AND
 S.pid = W.pid;
 
+CREATE VIEW pg_stat_wal_receiver AS
+SELECT
+s.pid,
+s.status,
+s.receive_start_lsn,
+s.receive_start_tli,
+s.received_lsn,
+s.received_tli,
+s.last_msg_send_time,
+s.last_msg_receipt_time,
+s.latest_end_lsn,
+s.latest_end_time,
+s.slot_name
+FROM pg_stat_get_wal_receiver() s
+WHERE s.pid IS NOT NULL;
+
 CREATE VIEW pg_stat_ssl AS
 SELECT
 S.pid,
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index 81f1529..7b36e02 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -46,9 +46,12 @@
 #include 
 #include 
 
+#include "access/htup_details.h"
 #include "access/timeline.h"
 #include "access/transam.h"
 #include "access/xlog_internal.h"
+#include "catalog/pg_type.h"
+#include "funcapi.h"
 #include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
@@ -57,7 +60,9 @@
 #include "storage/ipc.h"
 #include "storage/pmsignal.h"
 #include "storage/procarray.h"
+#include "utils/builtins.h"
 #include "utils/guc.h"
+#include "utils/pg_lsn.h"
 #include 
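
With the patch applied, the new view could be queried on a standby along these lines (a sketch based on the view definition above, not output from a real run):

```sql
-- On a hot standby with an active WAL receiver:
SELECT pid, status, received_lsn, latest_end_time, slot_name
FROM pg_stat_wal_receiver;
```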

Re: [HACKERS] Optimizer questions

2016-01-06 Thread David Rowley
On 6 January 2016 at 13:13, Alexander Korotkov 
wrote:

> On Wed, Jan 6, 2016 at 12:08 AM, Tom Lane  wrote:
>
>> konstantin knizhnik  writes:
>> > 1. The cost compared in grouping_planner doesn't take in account price
>> of get_authorized_users - it is not changed when I am altering function
>> cost. Is it correct behavior?
>>
>> The general problem of accounting for tlist eval cost is not handled very
>> well now, but especially not with respect to the idea that different paths
>> might have different tlist costs.  I'm working on an upper-planner rewrite
>> which should make this better, or at least make it practical to make it
>> better.
>>
>
> Hmm... Besides costing, it would be nice to postpone calculation of
> expensive tlist functions until after LIMIT.
>

I'd agree that it would be more than the costings that would need to be
improved here.

The simplest demonstration of the problem I can think of is to apply the
following:

diff --git a/src/backend/utils/adt/int.c b/src/backend/utils/adt/int.c
index 29d92a7..2ec9822 100644
--- a/src/backend/utils/adt/int.c
+++ b/src/backend/utils/adt/int.c
@@ -641,6 +641,8 @@ int4pl(PG_FUNCTION_ARGS)

result = arg1 + arg2;

+   elog(NOTICE, "int4pl(%d, %d)", arg1,arg2);
+
/*
 * Overflow check.  If the inputs are of different signs then their
sum
 * cannot overflow.  If the inputs are of the same sign, their sum
had

Then do:

create table a (b int);
insert into a select generate_series(1,10);
select b+b as bb from a order by b limit 1;
NOTICE:  int4pl(1, 1)
NOTICE:  int4pl(2, 2)
NOTICE:  int4pl(3, 3)
NOTICE:  int4pl(4, 4)
NOTICE:  int4pl(5, 5)
NOTICE:  int4pl(6, 6)
NOTICE:  int4pl(7, 7)
NOTICE:  int4pl(8, 8)
NOTICE:  int4pl(9, 9)
NOTICE:  int4pl(10, 10)
 bb
----
  2
(1 row)

We can see that int4pl() is needlessly called 9 times, though I think
this only applies to queries with LIMIT. I agree that it does seem like
an interesting route for optimisation.

It seems worthwhile to investigate how we might go about improving this so
that the evaluation of the target list happens after LIMIT, at least for
the columns which are not required before LIMIT.
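
In the meantime, a possible workaround (a sketch; it relies on the current planner not flattening a subquery that contains LIMIT) is to apply the LIMIT in a subquery, so the expression is evaluated only for the surviving rows:

```sql
-- Evaluate b + b only for the single row that survives the LIMIT;
-- the subquery with ORDER BY ... LIMIT is not pulled up, so the
-- addition runs once instead of once per input row.
SELECT b + b AS bb
FROM (SELECT b FROM a ORDER BY b LIMIT 1) AS sub;
```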

Konstantin, are you thinking of looking into this more, with plans to
implement code to improve this?

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] Improved tab completion for FDW DDL

2016-01-06 Thread Andreas Karlsson

On 01/04/2016 07:26 AM, Michael Paquier wrote:

You may want to use Matches() instead of TailMatches() for performance reasons.


Here is an updated version which uses Matches().

Andreas
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 4d2bee1..34dabc5 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1544,7 +1544,7 @@ psql_completion(const char *text, int start, int end)
 		COMPLETE_WITH_LIST3("MINVALUE", "MAXVALUE", "CYCLE");
 	/* ALTER SERVER  */
 	else if (Matches3("ALTER", "SERVER", MatchAny))
-		COMPLETE_WITH_LIST3("VERSION", "OPTIONS", "OWNER TO");
+		COMPLETE_WITH_LIST4("VERSION", "OPTIONS", "OWNER TO", "RENAME TO");
 	/* ALTER SYSTEM SET, RESET, RESET ALL */
 	else if (Matches2("ALTER", "SYSTEM"))
 		COMPLETE_WITH_LIST2("SET", "RESET");
@@ -2095,8 +2095,12 @@ psql_completion(const char *text, int start, int end)
 		COMPLETE_WITH_LIST3("MINVALUE", "MAXVALUE", "CYCLE");
 
 /* CREATE SERVER  */
+	/* Complete "CREATE SERVER " with "FOREIGN DATA WRAPPER" */
 	else if (Matches3("CREATE", "SERVER", MatchAny))
 		COMPLETE_WITH_LIST3("TYPE", "VERSION", "FOREIGN DATA WRAPPER");
+	/* Complete "CREATE SERVER  FOREIGN DATA WRAPPER" with fdw name */
+	else if (Matches6("CREATE", "SERVER", MatchAny, "FOREIGN", "DATA", "WRAPPER"))
+		COMPLETE_WITH_QUERY(Query_for_list_of_fdws);
 
 /* CREATE TABLE --- is allowed inside CREATE SCHEMA, so use TailMatches */
 	/* Complete "CREATE TEMP/TEMPORARY" with the possible temp objects */
@@ -2803,6 +2807,8 @@ psql_completion(const char *text, int start, int end)
 		COMPLETE_WITH_QUERY(Query_for_list_of_user_mappings);
 	else if (Matches5("CREATE|ALTER|DROP", "USER", "MAPPING", "FOR", MatchAny))
 		COMPLETE_WITH_CONST("SERVER");
+	else if (Matches7("CREATE|ALTER", "USER", "MAPPING", "FOR", MatchAny, "SERVER", MatchAny))
+		COMPLETE_WITH_CONST("OPTIONS (");
 
 /*
  * VACUUM [ FULL | FREEZE ] [ VERBOSE ] [ table ]



Re: [HACKERS] Inconsistent error handling in START_REPLICATION command

2016-01-06 Thread Shulgin, Oleksandr
On Tue, Jan 5, 2016 at 11:35 AM, Shulgin, Oleksandr <
oleksandr.shul...@zalando.de> wrote:

> On Tue, Jan 5, 2016 at 10:39 AM, Shulgin, Oleksandr <
> oleksandr.shul...@zalando.de> wrote:
>
>>
>> I didn't look in the code yet, but if someone knows off top of the head
>> the reason to this behavior, I'd be glad to learn it.
>>
>
> Looks like the attached patch fixes it for me:
>

Should I add it to the current commitfest?  It seems to me the patch is
trivial and unambiguous, so that might be overkill...

Thanks.
--
Alex


Re: [HACKERS] Regression caused by recent change to initdb?

2016-01-06 Thread Amit Langote
On 2016/01/06 17:32, Amit Langote wrote:
> Hi,
> 
> I stumbled upon a possibly strange behavior which may be related to recent
> initdb changes. For a freshly initdb'd cluster, the following looks fishy:
> 
> postgres=# SELECT relname, relnamespace::regnamespace FROM pg_class
> WHERE relnamespace != 'pg_catalog'::regnamespace
> AND relnamespace != 'pg_toast'::regnamespace
> AND relnamespace != 'information_schema'::regnamespace;
>        relname        |  relnamespace
> ----------------------+-----------------
>  pg_toast_11817   | pg_toast_temp_1
>  pg_toast_11817_index | pg_toast_temp_1
>  pg_toast_11822   | pg_toast_temp_1
>  pg_toast_11822_index | pg_toast_temp_1
>  pg_toast_11827   | pg_toast_temp_1
>  pg_toast_11827_index | pg_toast_temp_1
>  tmp_pg_description   | pg_temp_1
>  tmp_pg_shdescription | pg_temp_1
>  tmp_pg_collation | pg_temp_1
> (10 rows)
> 
> These seem to be leftovers from initdb.c's setup_description()
> and setup_collation().

I noticed these leftovers are not present in template1.

Thanks,
Amit







Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-01-06 Thread Rushabh Lathia
I started looking at the updated patch, and I definitely like the new
approach.

The patch passes the mandatory checks:

1) The patch applies cleanly using patch -p1
2) The source compiles cleanly
3) make check runs cleanly


The EXPLAIN output for the remote table and inheritance relations looks
better than with the earlier version of the patch.

-- Inheritance foreign relation
postgres=# explain (ANALYZE, VERBOSE) update fp set a = 20;
  QUERY
PLAN
---
 Update on public.fp  (cost=100.00..767.60 rows=10920 width=6) (actual
time=1.000..1.000 rows=0 loops=1)
   Foreign Update on public.fp
   Foreign Update on public.fc1
   Foreign Update on public.fc2
   Foreign Update on public.fc3
   ->  Foreign Update on public.fp  (cost=100.00..191.90 rows=2730 width=6)
(actual time=0.493..0.493 rows=0 loops=1)
 Remote SQL: UPDATE public.p SET a = 20
   ->  Foreign Update on public.fc1  (cost=100.00..191.90 rows=2730
width=6) (actual time=0.177..0.177 rows=0 loops=1)
 Remote SQL: UPDATE public.c1 SET a = 20
   ->  Foreign Update on public.fc2  (cost=100.00..191.90 rows=2730
width=6) (actual time=0.163..0.163 rows=0 loops=1)
 Remote SQL: UPDATE public.c2 SET a = 20
   ->  Foreign Update on public.fc3  (cost=100.00..191.90 rows=2730
width=6) (actual time=0.158..0.158 rows=0 loops=1)
 Remote SQL: UPDATE public.c3 SET a = 20
 Planning time: 0.228 ms
 Execution time: 1.359 ms
(15 rows)

-- Foreign table update
postgres=# explain (ANALYZE, VERBOSE) update ft1 set c8 = 'updated'  where
c1 = '200';
  QUERY
PLAN
--
 Update on public.ft1  (cost=100.00..116.13 rows=2 width=144) (actual
time=0.485..0.485 rows=0 loops=1)
   ->  Foreign Update on public.ft1  (cost=100.00..116.13 rows=2 width=144)
(actual time=0.483..0.483 rows=0 loops=1)
 Remote SQL: UPDATE public.t1 SET c8 = 'updated   '::character(10)
WHERE ((c1 = 200))
 Planning time: 0.158 ms
 Execution time: 0.786 ms
(5 rows)


-- Explain output for the returning clause:
postgres=# explain (ANALYZE, VERBOSE) update ft1 set c8 = 'updated'  where
c1 = '200' returning c2;
  QUERY
PLAN
--
 Update on public.ft1  (cost=100.00..116.13 rows=2 width=144) (actual
time=0.516..0.516 rows=0 loops=1)
   Output: c2
   ->  Foreign Update on public.ft1  (cost=100.00..116.13 rows=2 width=144)
(actual time=0.514..0.514 rows=0 loops=1)
 Remote SQL: UPDATE public.t1 SET c8 = 'updated   '::character(10)
WHERE ((c1 = 200)) RETURNING c2
 Planning time: 0.172 ms
 Execution time: 0.938 ms
(6 rows)


-- Explain output when returning clause is not pushdown safe:
postgres=# explain (ANALYZE, VERBOSE) update ft1 set c8 = 'updated'  where
c1 = '200' returning local_func(20);
  QUERY
PLAN
--
 Update on public.ft1  (cost=100.00..116.13 rows=2 width=144) (actual
time=0.364..0.364 rows=0 loops=1)
   Output: local_func(20)
   ->  Foreign Update on public.ft1  (cost=100.00..116.13 rows=2 width=144)
(actual time=0.363..0.363 rows=0 loops=1)
 Remote SQL: UPDATE public.t1 SET c8 = 'updated   '::character(10)
WHERE ((c1 = 200))
 Planning time: 0.142 ms
 Execution time: 0.623 ms
(6 rows)

-- Explain output with PREPARE:
postgres=# explain (ANALYZE, VERBOSE) execute ftupdate(20);
  QUERY
PLAN
--
 Update on public.ft1  (cost=100.00..116.13 rows=2 width=144) (actual
time=0.712..0.712 rows=0 loops=1)
   ->  Foreign Update on public.ft1  (cost=100.00..116.13 rows=2 width=144)
(actual time=0.711..0.711 rows=1 loops=1)
 Remote SQL: UPDATE public.t1 SET c8 = 'updated   '::character(10)
WHERE ((c1 = 20))
 Execution time: 1.001 ms
(4 rows)


From an initial look and testing, overall things are looking great. I am
still reviewing the code changes, but here are a few early
doubts/questions:

.) What is the need for the following change?

@@ -833,9 +833,6 @@ appendWhereClause(StringInfo buf,
int nestlevel;
ListCell   *lc;

-   if (params)
-   *params = NIL;  /* initialize result list to empty */
-
/* Set up context struct for recursion */
context.root = root;
context.foreignrel = baserel;
@@ -971,6 +968,63 @@ deparseUpdateSql(StringInfo buf, PlannerInfo *root,
 }


.) When Tom Lane and Stephen Frost suggested getting the core code 

Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-01-06 Thread Thom Brown
On 25 December 2015 at 10:00, Etsuro Fujita  wrote:
> On 2015/12/24 4:34, Robert Haas wrote:
>>
>> On Wed, Dec 23, 2015 at 5:50 AM, Rushabh Lathia
>>  wrote:
>>>
>>> +1.
>>>
>>> I like idea of separate FDW API for the DML Pushdown. Was thinking can't
>>> we
>>> can re-use the  IterateForeignScan(ForeignScanState *node) rather then
>>> introducing IterateDMLPushdown(ForeignScanState *node) new API ?
>
>
>> Yeah, I think we need to ask ourselves what advantage we're getting
>> out of adding any new core APIs.  Marking the scan as a pushed-down
>> update or delete has some benefit in terms of making the information
>> visible via EXPLAIN, but even that's a pretty thin benefit.  The
>> iterate method seems to just complicate the core code without any
>> benefit at all.  More generally, there is very, very little code in
>> this patch that accomplishes anything that could not be done just as
>> well with the existing methods.  So why are we even doing these core
>> changes?
>
>
> From the FDWs' point of view, ISTM that what FDWs have to do for
> IterateDMLPushdown is quite different from what FDWs have to do for
> IterateForeignScan; eg, IterateDMLPushdown must work in accordance with
> presence/absence of a RETURNING list.  (In addition to that,
> IterateDMLPushdown has been designed so that it must make the scan tuple
> available to later RETURNING projection in nodeModifyTable.c.)  So, I think
> that it's better to FDWs to add separate APIs for the DML pushdown, making
> the FDW code much simpler.  So based on that idea, I added the postgres_fdw
> changes to the patch.  Attached is an updated version of the patch, which is
> still WIP, but I'd be happy if I could get any feedback.
>
>> Tom seemed to think that we could centralize some checks in the core
>> code, say, related to triggers, or to LIMIT.  But there's nothing like
>> that in this patch, so I'm not really understanding the point.
>
>
> For the trigger check, I added relation_has_row_level_triggers.  I placed
> that function in postgres_fdw.c in the updated patch, but I think that by
> placing that function in the core, FDWs can share that function.  As for the
> LIMIT, I'm not sure we can do something about that.
>
> I think the current design allows us to handle a pushed-down update on a
> join, ie, "UPDATE foo ... FROM bar ..." where both foo and bar are remote,
> which was Tom's concern, but I'll leave that for another patch.  Also, I
> think the current design could also extend to push down INSERT .. RETURNING
> .., but I'd like to leave that for future work.
>
> I'll add this to the next CF.

I've run into an issue:

*# UPDATE master_customers SET id = 22 WHERE id = 16 RETURNING
tableoid::regclass;
ERROR:
CONTEXT:  Remote SQL command: UPDATE public.customers SET id = 22
WHERE ((id = 16)) RETURNING NULL

However, this works:

*# UPDATE master_customers SET id = 22 WHERE id = 16 RETURNING
tableoid::regclass, *;
tableoid | id | name  |company| registered_date |
expiry_date | active | status  | account_level
-++---+---+-+-++-+---
 local_customers | 22 | Bruce | Jo's Cupcakes | 2015-01-15  |
2017-01-14  | t  | running | basic
(1 row)

In this example, "local_customers" inherits from the remote table
"public"."customers", which inherits again from the local table
"master_customers"

Same issue with DELETE of course, and the ::regclass isn't important here.

Thom




Re: [HACKERS] Add schema-qualified relnames in constraint error messages.

2016-01-06 Thread Shulgin, Oleksandr
On Wed, Jan 6, 2016 at 5:06 AM, Jim Nasby  wrote:

> On 1/5/16 9:16 PM, Tom Lane wrote:
>
>> Jim Nasby  writes:
>>
>>> FWIW, I suspect very few people know about the verbosity setting (I
>>> didn't until a few months ago...) Maybe psql should hint about it the
>>> first time an error is reported in a session.
>>>
>>
>> Actually, what'd be really handy IMO is something to regurgitate the
>> most recent error in verbose mode, without making a permanent session
>> state change.  Something like
>>
>> regression=# insert into bar values(1);
>> ERROR:  insert or update on table "bar" violates foreign key constraint
>> "bar_f1_fkey"
>> DETAIL:  Key (f1)=(1) is not present in table "foo".
>> regression=# \saywhat
>> ERROR:  23503: insert or update on table "bar" violates foreign key
>> constraint "bar_f1_fkey"
>> DETAIL:  Key (f1)=(1) is not present in table "foo".
>> SCHEMA NAME:  public
>> TABLE NAME:  bar
>> CONSTRAINT NAME:  bar_f1_fkey
>> LOCATION:  ri_ReportViolation, ri_triggers.c:3326
>> regression=#
>>
>> Not sure how hard that would be to do within psql's current structure.
>>
>
> At first glance, it looks like it just means changing AcceptResult() to
> use PQresultErrorField in addition to PQresultErrorMessage, and stuffing
> the results somewhere. And of course adding \saywhat to the psql parser,
> but maybe someone more versed in psql code can verify that.
>
> If it is that simple, looks like another good beginner hacker task. :)


Sorry, I couldn't resist it: I was too excited to learn such an option
existed. :-)

Please find attached a POC patch, using \errverbose for the command name.
Unfortunately, I didn't see a good way to contain the change in psql only,
and had to change libpq, adding the new interface PQresultBuildErrorMessage().
Also, what I don't like very much is that we need to keep track of the
last result from the last SendQuery() in psql's pset.  So in my view this
is sort of a dirty hack that works nonetheless.

Cheers!
--
Alex
From 402866a6899791e7f410ed747e3b0018eed717e0 Mon Sep 17 00:00:00 2001
From: Oleksandr Shulgin 
Date: Wed, 6 Jan 2016 14:58:17 +0100
Subject: [PATCH] POC: \errverbose in psql

---
 src/bin/psql/command.c  |  20 ++
 src/bin/psql/common.c   |   6 +-
 src/bin/psql/settings.h |   1 +
 src/bin/psql/startup.c  |   1 +
 src/interfaces/libpq/exports.txt|   1 +
 src/interfaces/libpq/fe-exec.c  |  21 ++
 src/interfaces/libpq/fe-protocol3.c | 131 +++-
 src/interfaces/libpq/libpq-fe.h |   1 +
 src/interfaces/libpq/libpq-int.h|   2 +
 9 files changed, 122 insertions(+), 62 deletions(-)

diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 9750a5b..9ae5ae5 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -822,6 +822,24 @@ exec_command(const char *cmd,
 		}
 	}
 
+	/* \errverbose -- display verbose error message from last result */
+	else if (strcmp(cmd, "errverbose") == 0)
+	{
+		char	   *errmsg;
+
+		if (pset.last_result && PQresultStatus(pset.last_result) == PGRES_FATAL_ERROR)
+		{
+			/* increase error verbosity for a moment */
+			PQsetErrorVerbosity(pset.db, PQERRORS_VERBOSE);
+			errmsg = PQresultBuildErrorMessage(pset.db, pset.last_result);
+			PQsetErrorVerbosity(pset.db, pset.verbosity);
+
+			psql_error("%s", errmsg);
+
+			PQfreemem(errmsg);
+		}
+	}
+
 	/* \f -- change field separator */
 	else if (strcmp(cmd, "f") == 0)
 	{
@@ -1865,6 +1883,8 @@ do_connect(char *dbname, char *user, char *host, char *port)
 			{
 PQfinish(o_conn);
 pset.db = NULL;
+PQclear(pset.last_result);
+pset.last_result = NULL;
 			}
 		}
 
diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c
index 2cb2e9b..85658e6 100644
--- a/src/bin/psql/common.c
+++ b/src/bin/psql/common.c
@@ -315,6 +315,8 @@ CheckConnection(void)
 			psql_error("Failed.\n");
 			PQfinish(pset.db);
 			pset.db = NULL;
+			PQclear(pset.last_result);
+			pset.last_result = NULL;
 			ResetCancelConn();
 			UnsyncVariables();
 		}
@@ -1154,7 +1156,9 @@ SendQuery(const char *query)
 		}
 	}
 
-	PQclear(results);
+	/* XXX: can we have PQclear that would free everything except errfields? */
+	PQclear(pset.last_result);
+	pset.last_result = results;
 
 	/* Possible microtiming output */
 	if (pset.timing)
diff --git a/src/bin/psql/settings.h b/src/bin/psql/settings.h
index 20a6470..8b88fbb 100644
--- a/src/bin/psql/settings.h
+++ b/src/bin/psql/settings.h
@@ -80,6 +80,7 @@ enum trivalue
 typedef struct _psqlSettings
 {
 	PGconn	   *db;/* connection to backend */
+	PGresult   *last_result;	/* last result from running a query, if any */
 	int			encoding;		/* client_encoding */
 	FILE	   *queryFout;		/* where to send the query results */
 	bool		queryFoutPipe;	/* queryFout is from a popen() */
diff --git a/src/bin/psql/startup.c b/src/bin/psql/startup.c
index 6916f6f..7586ee81 100644
--- 

Re: [HACKERS] Comment typo in namespace.c

2016-01-06 Thread Amit Langote
On Wed, Jan 6, 2016 at 11:51 PM, Tom Lane  wrote:
> Amit Langote  writes:
>> Attached fixes a typo: s/non-exstant/non-existant.
>> Alternatively, it could be spelled as 'existent' but the patch doesn't.
>
> "non-existant" is flat wrong, so if we're going to fix typos, let's
> fix them to actually be English.

So, non-existent? non-extant? It seems to me like an 's' accidentally
sneaked in when the author of the comment tried to write the latter
spelling. But the former sounds more familiar (at least to me).

Thanks,
Amit


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Comment typo in namespace.c

2016-01-06 Thread Tom Lane
Amit Langote  writes:
> Attached fixes a typo: s/non-exstant/non-existant.
> Alternatively, it could be spelled as 'existent' but the patch doesn't.

"non-existant" is flat wrong, so if we're going to fix typos, let's
fix them to actually be English.

regards, tom lane




Re: [HACKERS] Improving replay of XLOG_BTREE_VACUUM records

2016-01-06 Thread Alvaro Herrera
Vladimir Borodin wrote:

> There are situations in which vacuuming big btree index causes stuck
> in WAL replaying on hot standby servers for quite a long time. I’ve
> described the problem in more details in this thread [0]. Below in
> that thread Kevin Grittner proposed a good way for improving btree
> scans so that btree vacuuming logic could be seriously simplified.
> Since I don’t know when that may happen I’ve done a patch that makes
> some improvement right now. If Kevin or someone else would expand [1]
> for handling all types of btree scans, I suppose, my patch could be
> thrown away and vacuuming logic should be strongly rewritten.

You submitted this patch in May 2015 -- and 7 months later, Simon came
up with another patch that's supposed to fix the underlying problem, so
that this shouldn't be a problem anymore.

Would you please have a look at Simon's patch, in particular verify
whether it solves the performance dip in your testing environment?
https://www.postgresql.org/message-id/CANP8%2BjJuyExr1HnTAdZraWsWkfc-octhug7YPtzPtJcYbyi4pA%40mail.gmail.com
(Note there's an updated patch a few emails down the thread.)

If it seems to fix the problem for you, I think we should mark yours
rejected and just apply Simon's.

Thanks!

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Comment typo in namespace.c

2016-01-06 Thread Tom Lane
Amit Langote  writes:
> On Wed, Jan 6, 2016 at 11:51 PM, Tom Lane  wrote:
>> "non-existant" is flat wrong, so if we're going to fix typos, let's
>> fix them to actually be English.

> So, non-existent? non-extant? It seems to me like an 's' accidentally
> sneaked in when the author of the comment tried to write the latter
> spelling. But the former sounds more familiar (at least to me).

"existent" is a word according to my dictionary, so "non-existent"
is fine.  "extant" is also a word but it's less common and doesn't
really mean the same thing --- it carries a connotation of "still
in existence, surviving".  So you might say "Stonebraker's papers
about Postgres from the '80s are still extant".  "Existent" just
means "in existence" without any particular implication about time
passing, so it's probably what is meant in most cases.

(Actually, in the particular context here, I guess "extant" would
be sensible, but it would be rather hi-falutin' usage.  I'd go
with "existent".)

regards, tom lane




Re: [HACKERS] Add numeric_trim(numeric)

2016-01-06 Thread Dean Rasheed
On 27 December 2015 at 07:11, Pavel Stehule  wrote:
>> 2015-11-19 3:58 GMT+01:00 Marko Tiikkaja :
>>> Here's a patch for the second function suggested in
>>> 5643125e.1030...@joh.to.
>>>
> So I am sending a review of this patch.
>

I took a quick look at this too.

It seems like a useful function to have, but perhaps it should just be
called trim() rather than numeric_trim(), for consistency with the
names of the other numeric functions, which don't start with
"numeric_".


I found a bug in the dscale calculation:

select numeric_trim(1e100);
ERROR:  value overflows numeric format

The problem is that the trailing zeros are already stripped off, and
so the computation

dscale = (arg.ndigits - arg.weight - 1) * DEC_DIGITS;

results in a negative dscale, which needs to be guarded against.


Actually I found the implementation a little confusing, partly because
the first comment doesn't really match the line of code that follows
it, and partly because of the complexity of the loop, the macros and
working out all the exit conditions from the loop. A much simpler
implementation would be to first call strip_var(), and then you'd only
need to inspect the final digit (known to be non-zero). In
pseudo-code, it might look a little like the following:

init_var_from_num()
strip_var()
if ndigits > 0:
dscale = (arg.ndigits - arg.weight - 1) * DEC_DIGITS
if dscale < 0:
dscale = 0
if dscale > 0:
// Here we know the last digit is non-zero and that it is to the
// right of the decimal point, so inspect it and adjust dscale
// (reducing it by up to 3) to trim any remaining zero decimal
// digits
dscale -= ...
else:
dscale = 0

This then only has to test for non-zero decimal digits in the final
base-NBASE digit, rather than looping over digits from the right. In
addition, it removes the need to do the following at the start:

/* for simplicity in the loop below, do full NBASE digits at a time */
dscale = ((arg.dscale + DEC_DIGITS - 1) / DEC_DIGITS) * DEC_DIGITS;

which is the comment I found confusing because doesn't really match up
with what that line of code is doing.

Also, it then won't be necessary to do

/* If it's zero, normalize the sign and weight */
if (ndigits == 0)
{
arg.sign = NUMERIC_POS;
arg.weight = 0;
arg.dscale = 0;
}

because strip_var() will have taken care of that.

In fact, in most (all?) cases, the equivalent of strip_var() will have
already been called by whatever produced the input Numeric, so it
won't actually have any work to do, but it will fall through very
quickly in those cases.
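
The pseudo-code above can be sanity-checked outside the backend. The
following Python simulation (hypothetical helper name, not backend code;
it assumes numeric.c's NBASE = 10000, DEC_DIGITS = 4 representation)
shows how clamping the negative dscale avoids the 1e100 overflow while
still trimming trailing zero decimal digits in the final base-NBASE digit:

```python
DEC_DIGITS = 4  # decimal digits per base-10000 digit, as in numeric.c

def trimmed_dscale(digits, weight):
    """Simulate the trimmed dscale for a value given as a list of
    base-10000 digits and its NBASE weight."""
    # strip_var() analogue: drop trailing zero NBASE digits
    while digits and digits[-1] == 0:
        digits = digits[:-1]
    if not digits:
        return 0
    dscale = (len(digits) - weight - 1) * DEC_DIGITS
    if dscale <= 0:
        return 0  # guards the negative dscale, e.g. for 1e100
    # the last digit is known non-zero; trim up to DEC_DIGITS - 1
    # trailing zero decimal digits from it
    last = digits[-1]
    while dscale > 0 and last % 10 == 0:
        last //= 10
        dscale -= 1
    return dscale

# 1234.5600 -> dscale 2, i.e. 1234.56
print(trimmed_dscale([1234, 5600], 0))  # 2
# 1e100 is a single digit 1 with weight 25: dscale clamps to 0
print(trimmed_dscale([1], 25))  # 0
```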


One final comment -- in the regression tests the quotes around all the
numbers are unnecessary.

Regards,
Dean




Re: [HACKERS] Add numeric_trim(numeric)

2016-01-06 Thread Pavel Stehule
Hi

2016-01-06 16:21 GMT+01:00 Dean Rasheed :

> On 27 December 2015 at 07:11, Pavel Stehule 
> wrote:
> >> 2015-11-19 3:58 GMT+01:00 Marko Tiikkaja :
> >>> Here's a patch for the second function suggested in
> >>> 5643125e.1030...@joh.to.
> >>>
> > So I am sending a review of this patch.
> >
>
> I took a quick look at this too.
>
> It seems like a useful function to have, but perhaps it should just be
> called trim() rather than numeric_trim(), for consistency with the
> names of the other numeric functions, which don't start with
> "numeric_".
>
>
> I found a bug in the dscale calculation:
>
> select numeric_trim(1e100);
> ERROR:  value overflows numeric format
>

my oversight :(



>
> The problem is that the trailing zeros are already stripped off, and
> so the computation
>
> dscale = (arg.ndigits - arg.weight - 1) * DEC_DIGITS;
>
> results in a negative dscale, which needs to be guarded against.
>
>
> Actually I found the implementation a little confusing, partly because
> the first comment doesn't really match the line of code that follows
> it, and partly because of the complexity of the loop, the macros and
> working out all the exit conditions from the loop. A much simpler
> implementation would be to first call strip_var(), and then you'd only
> need to inspect the final digit (known to be non-zero). In
> pseudo-code, it might look a little like the following:
>
> init_var_from_num()
> strip_var()
> if ndigits > 0:
> dscale = (arg.ndigits - arg.weight - 1) * DEC_DIGITS
> if dscale < 0:
> dscale = 0
> if dscale > 0:
> // Here we know the last digit is non-zero and that it is to
> the
> // right of the decimal point, so inspect it and adjust dscale
> // (reducing it by up to 3) to trim any remaining zero decimal
> // digits
> dscale -= ...
> else:
> dscale = 0
>
> This then only has to test for non-zero decimal digits in the final
> base-NBASE digit, rather than looping over digits from the right. In
> addition, it removes the need to do the following at the start:
>
> /* for simplicity in the loop below, do full NBASE digits at a time */
> dscale = ((arg.dscale + DEC_DIGITS - 1) / DEC_DIGITS) * DEC_DIGITS;
>
> which is the comment I found confusing because doesn't really match up
> with what that line of code is doing.
>
> Also, it then won't be necessary to do
>
> /* If it's zero, normalize the sign and weight */
> if (ndigits == 0)
> {
> arg.sign = NUMERIC_POS;
> arg.weight = 0;
> arg.dscale = 0;
> }
>
> because strip_var() will have taken care of that.
>
> In fact, in most (all?) cases, the equivalent of strip_var() will have
> already been called by whatever produced the input Numeric, so it
> won't actually have any work to do, but it will fall through very
> quickly in those cases.
>
>
> One final comment -- in the regression tests the quotes around all the
> numbers are unnecessary.
>

so I changed the status of this patch to "waiting on author"

Thank you

Pavel


>
> Regards,
> Dean
>


Re: [HACKERS] Regression caused by recent change to initdb?

2016-01-06 Thread Tom Lane
Amit Langote  writes:
> On 2016/01/06 17:32, Amit Langote wrote:
>> I stumbled upon a possibly strange behavior which may be related to recent
>> initdb changes. For a freshly initdb'd cluster, the following looks fishy:
>> ...
>> These seem to be leftovers of activities of initdb.c's setup_description()
>> and setup_collation().

> I noticed these leftovers are not present in template1.

Ah, right: they get deleted from template1 correctly when the
initdb-driven session shuts down.  But because of the merger into a single
session, they're still there at the instant that we clone template1 into
template0 and postgres databases, and there is nothing to remove them from
there.

The minimum-change way to deal with it would be to explicitly DROP those
tables when we're done with them.

A possibly slightly less fragile answer is to run two sessions, the
second of which *only* processes the DB copying steps.

Neither of these answers seems all that clean to me...

regards, tom lane




[HACKERS] Commitfest app

2016-01-06 Thread Magnus Hagander
Hi!

I'm about to start some upgrades on the machine running
commitfest.postgresql.org. The app itself won't change, but the underlying
OS and webserver will, so there will be some disruptions of service and
some strange and scary error messages for a while.

Hopefully it will take less than an hour, but I'll let you know when it's
done as it might be a bit longer.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Additional role attributes && superuser review

2016-01-06 Thread Stephen Frost
Robert, Noah,

I just wanted to start off by saying thank you for taking the time read
and comment with your thoughts on this concept.  I was a bit frustrated
about it feeling rather late, but appreciate the comments which have
been made as they've certainly been constructive.

* Robert Haas (robertmh...@gmail.com) wrote:
> On Mon, Jan 4, 2016 at 4:56 PM, Stephen Frost  wrote:
> >> First, it's not really going to matter to users very much whether the
> >> command to enable one of these features is a single GRANT command or a
> >> short sequence of GRANT commands executed one after another.  So even
> >> if we don't have roles that bundle together multiple permissions, you
> >> will still be able to do this; you just might need more than one
> >> command.  Probably, I'm guessing, not very many commands, but more
> >> than one.
> >
> > I disagree.  I find that it does make a difference to users to be using
> > a well documented and clear approach over one which involves fiddling
> > with the individual pieces to get everything to work, and if a new
> > function comes along that is useful for backup users, that would have to
> > also be granted, even if it would clearly be useful to a backup role.
> 
> How is that a fair way to characterize the discussion here?  Just
> because you have to execute several SQL commands instead of one
> doesn't turn a "well-documented and clear approach" into something
> that involves "fiddling with individual pieces".  Cutting and pasting
> a sequence of 3 or 4 SQL commands into a psql window is not a lot
> harder than copying and pasting a single one, and does not turn a good
> approach into a shambles.

I was looking at it from a perspective of what we have currently vs.
what the future state with default roles would be.  That's not an
entirely fair characterization.  If we supported ACLs on catalog objects
via pg_dump, then we could add documentation along the lines of "backups
generally need access to functions X, Y, Z, here's an example of how to
create such a role: blah, blah."

Of course, that documentation would likely have to be repeated in the
various backup tools, though it's possible they could tailor those, if
there was something different about their particular tool.

> >> Second, I think it's really unlikely that you're going to maintain
> >> perfect alignment between your predefined roles and the permissions
> >> that third-party tools need over the course of multiple releases.  I
> >> think the chances of that working out are just about zero.  Sure, you
> >> can make the initial set of permissions granted to pg_backup match
> >> exactly what pgbackrest needs, but it's a good bet that one of the six
> >> or eight widely-used backup tools uses something that's not included
> >> in that set.
> >
> > There may be something I've missed with the proposed pg_backup role, but
> > I don't believe you're correct that there isn't a set of privileges
> > which all of those backup tools need and which could be provided through
> > the pg_backup role.
> 
> Well, there's certainly some set of privileges that will make them all
> work.  But it might include more than some of them want.  If you done
> any analysis of this, I have not seen it posted to this thread.

I can certainly work up a formal analysis and submit it for
consideration.

> >> And even if not, it doesn't require very much
> >> imagination to suppose that some tool, maybe pgbackrest or maybe
> >> something else, that comes along over the next few releases will
> >> require some extra permission.  When that happens, will we include add
> >> that permission to the pg_backup role, or not?
> >
> > As I pointed out previously, we've already been doing this with the
> > replication role attribute and I don't recall any complaining about it.
> 
> 1. This doesn't really seem like the same thing.  You're introducing
> something quite new here: these are not role attributes that apply
> only to the role itself, but inheritable role attributes.

This approach started out by adding role attributes to handle these
kinds of rights, but in discussion with Tom and Magnus, iirc (no, I
don't have the specific links or threads, though I have asked Magnus to
take a look at this thread, as he has time), the idea of default roles
seemed better specifically because they would then be inheritable and
granting access could also be delegated.

> 2. I believe that the discussion about what the replication role
> should and should not allow involved considerably more people than
> have discussed any of the specific roles you propose to add here.

I didn't intend to dispute that, but...

> 3. It was clear from the outset that the replication role's basic
> purpose was to be sufficient privilege for a streaming standby and no
> more.  The remit of these roles is a lot less clear, at least to me.

I've certainly intended these roles to be clearly defined and
documented.  The point I was trying to address above is 

Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2016-01-06 Thread Catalin Iacob
On Wed, Jan 6, 2016 at 3:11 AM, Jim Nasby  wrote:
> Which doesn't help anyone, because neither of those provide a list of "hey,
> here's stuff you could do to contribute". The closest we come to that is the
> TODO, which isn't well known and has almost no items for newbies (and the
> newbie items that are there don't offer much advice).
>
> The reason I think this matters is because yesterday I posted a task for a new
> hacker with simple guidelines and 24 hours later it was completed[1]. That
> tells me there's people that would love to contribute but don't know how or
> what to contribute.

Jim,

I want to explicitly thank you for your post about that task, I would
like to see more of that. I share the sentiment that there are more
people wanting to contribute but finding it rather hard to find a
starting point. I was (am?) in that position and so far found two ways
out:

1. I looked at the commit fest patches as a list of things to
contribute to (by doing a review which is currently in progress)
2. Robert at some point mentioned in an email "someone could improve
the docs in this patch" so I did that

But I can see that 1. is intimidating to do for new people that might
think "how can you review without ever having looked at the code
before?". Turns out you can and the wiki mentions it explicitly but
it's probably still intimidating for some. And 2. required noticing
Robert's sentence in the middle of a long email thread.

I also think a list of small things suitable for new contributors
would help attracting them. Not all would stick and go on to larger
items but hopefully at least some would.




Re: [HACKERS] [PROPOSAL] VACUUM Progress Checker.

2016-01-06 Thread Sudhir Lonkar-2
Hello,
>Please find attached patch addressing following comments
I have tested this patch.
It is showing empty (null) in the phase column of pg_stat_vacuum_progress when
I switch to an unprivileged user.
In the previous patch, it was showing  in the phase
column.

Thanks and Regards,
Sudhir Lonkar







Re: [HACKERS] Very confusing installcheck behavior with PGXS

2016-01-06 Thread Robert Haas
On Sun, Jan 3, 2016 at 5:22 PM, Jim Nasby  wrote:
> The rule that gets executed if you do `make installcheck` with something
> using PGXS is
>
> pgxs.mk:$(pg_regress_installcheck) $(REGRESS_OPTS) $(REGRESS)
>
> where $(pg_regress_installcheck) is set in Makefile.global.in to
>
>> pg_regress_installcheck = $(top_builddir)/src/test/regress/pg_regress
>> --inputdir=$(srcdir) --bindir='$(bindir)' $(pg_regress_locale_flags)
>> $(EXTRA_REGRESS_OPTS)
>
> The problem here is that in a PGXS make, srcdir is set to '.'[1], and
> --inputdir is specified a second time in REGRESS_OPTS. Normally that works
> OK (for some reason ignoring what's in ./sql), but if you happen to have a
> file in your test/sql directory that matches a file in ./sql, pg_regress
> runs the first file and not the second.

Stupid question time: why in the world would you have that?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pglogical_output - a general purpose logical decoding output plugin

2016-01-06 Thread Greg Stark
On Wed, Jan 6, 2016 at 5:17 PM, Peter Eisentraut  wrote:
> I can't imagine that there is a lot of interest in a replication tool
> where you only get one side of it, no matter how well-designed or
> general it is.

Well I do have another purpose in mind myself so I do appreciate it
being available now and separately.

However you also need to keep in mind that any of these other purposes
will be more or less equally large projects as logical replication.
There's no particular reason to expect one to be able to start up
today and provide feedback faster than the replication code that's
already been under development for ages. I haven't even started on my
pet project and probably won't until February. And I haven't even
thought through the details of it so I don't even know if it'll be a
matter of a few weeks or months or more.

The one project that does seem like it should be fairly fast to get
going and provide a relatively easy way to test the APIs separately
would be an auditing tool. I saw one go by but didn't look into
whether it used logical decoding or another mechanism. One based on
logical decoding does seem like it would let you verify that, for
example, the api gave the right information to filter effectively and
store meta information to index the audit records effectively.

-- 
greg




Re: [HACKERS] Proposal: SET ROLE hook

2016-01-06 Thread Tom Lane
Joe Conway  writes:
> Compared to both of these alternatives, I still feel that the specific
> SET ROLE hook is cleanest and best path forward. If there are no other
> comments or concerns, I will commit this in a day or two.

While I don't think there's any great harm in inventing such a hook, I'm
not sure it's going to be all that useful where placed.  GUC assign_hooks
basically cannot risk throwing errors, which enormously restricts what can
safely be done inside the proposed hook: it would be unwise to do catalog
accesses, for example.  (Which means I think the example usage is broken;
in fact, it's already broken by your note that the code has to be able to
execute in a failed transaction.)

I think a design that was actually somewhat robust would require two
hooks, one at check_role and one at assign_role, wherein the first one
would do any potentially-failing work and package all required info into
a blob that could be passed through to the assign hook.
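
That two-phase shape — a check hook that may fail and packages everything
into a blob, and an assign hook that only applies the blob — can be
sketched roughly as follows (Python, with hypothetical names; in the real
GUC machinery the blob corresponds to the "extra" pointer):

```python
def check_role_hook(role_name, catalog):
    """Check phase: may do catalog access and may fail.
    Returns None to reject, or a blob for the assign phase."""
    roleid = catalog.get(role_name)
    if roleid is None:
        return None  # rejected; SET ROLE would error out here
    return {"roleid": roleid, "is_superuser": role_name == "postgres"}

def assign_role_hook(blob, session):
    """Assign phase: must not fail and must not touch the catalogs;
    it only applies precomputed info, so it is safe even when the
    transaction has already failed."""
    session["roleid"] = blob["roleid"]
    session["is_superuser"] = blob["is_superuser"]

catalog = {"alice": 16384, "postgres": 10}
session = {}
blob = check_role_hook("alice", catalog)
if blob is not None:
    assign_role_hook(blob, session)
print(session)  # {'roleid': 16384, 'is_superuser': False}
```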

regards, tom lane




Re: [HACKERS] Proposal for JSONB functions for internal representation casting insted text-casting

2016-01-06 Thread Robert Haas
On Sun, Jan 3, 2016 at 3:24 PM, Peter Krauss  wrote:
> The usefulness of  ->>  operator is indisputable, but even with boolean or
> numeric values, with good binary internal representation, it returns JSONB
> value as text data type.
>
> The simple (myJSONB->>'myField')::expectedType is not enough because:
>
> 1) there is no internal optimization; it needs two-step casting, first
> binary-to-text, then text-to-expectedType.
>
> 2) if expectedType does not match the associated jsonb_typeof, it
> generates an error... The ideal "return NULL" convention is not easy to
> implement with the usual casting.

Agreed, something like this would be useful.
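
The "return NULL" convention described above could look roughly like the
following (a Python sketch of the proposed semantics only, not of any
existing jsonb API): a typed accessor that yields NULL rather than an
error when the stored jsonb type does not match.

```python
def jsonb_field_as_number(obj, key):
    """Return the field only if its jsonb type is already numeric;
    return None (SQL NULL) instead of raising on a mismatch."""
    value = obj.get(key)
    # bool is a subclass of int in Python; exclude it, as jsonb
    # distinguishes booleans from numbers
    if isinstance(value, (int, float)) and not isinstance(value, bool):
        return value
    return None

doc = {"myField": 42, "flag": True, "name": "x"}
print(jsonb_field_as_number(doc, "myField"))  # 42
print(jsonb_field_as_number(doc, "name"))     # None, not an error
```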

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Better detail logging for password auth failures

2016-01-06 Thread Robert Haas
On Thu, Dec 31, 2015 at 11:09 AM, Edson - Amplosoft Software
 wrote:
> [ quote of an existing message in its entirety ]
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.

Yes, that was extremely brief.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Proposal for JSONB functions for internal representation casting insted text-casting

2016-01-06 Thread Pavel Stehule
2016-01-06 19:52 GMT+01:00 Robert Haas :

> On Sun, Jan 3, 2016 at 3:24 PM, Peter Krauss  wrote:
> > The usefulness of the ->> operator is indisputable, but even with boolean
> > or numeric values, with good binary internal representation, it returns
> > JSONB value as text data type.
> >
> > The simple (myJSONB->>'myField')::expectedType is not enough because:
> >
> > 1) there is no internal optimization; it needs two-step casting, first
> > binary-to-text, then text-to-expectedType.
> >
> > 2) if expectedType does not match the associated jsonb_typeof, it
> > generates an error... The ideal "return NULL" convention is not easy to
> > implement with the usual casting.
>

@2 looks a little bit dangerous. If the expected value isn't a number but
text, then I'd expect an exception in all cases.

Pavel


>
> Agreed, something like this would be useful.
>



>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
>


Re: [HACKERS] Very confusing installcheck behavior with PGXS

2016-01-06 Thread Tom Lane
Robert Haas  writes:
> On Sun, Jan 3, 2016 at 5:22 PM, Jim Nasby  wrote:
>> The rule that gets executed if you do `make installcheck` with something
>> using PGXS is
>> 
>> pgxs.mk:$(pg_regress_installcheck) $(REGRESS_OPTS) $(REGRESS)
>> 
>> where $(pg_regress_installcheck) is set in Makefile.global.in to
>> 
> pg_regress_installcheck = $(top_builddir)/src/test/regress/pg_regress
> --inputdir=$(srcdir) --bindir='$(bindir)' $(pg_regress_locale_flags)
> $(EXTRA_REGRESS_OPTS)
>> 
>> The problem here is that in a PGXS make, srcdir is set to '.'[1], and
>> --inputdir is specified a second time in REGRESS_OPTS. Normally that works
>> OK (for some reason ignoring what's in ./sql), but if you happen to have a
>> file in your test/sql directory that matches a file in ./sql, pg_regress
>> runs the first file and not the second.

> Stupid question time: why in the world would you have that?

AFAICS, the pieces we supply (Makefile.global and pgxs.mk) would only
specify --inputdir once.  If there's a second specification being added
to REGRESS_OPTS by your own Makefile, that would override the default,
which seems like entirely sensible behavior.  Maybe I'm misunderstanding
something, but it sounds like you're saying "if I write --inputdir=test
in REGRESS_OPTS, it runs the tests in test/sql not those in ./sql".
Why would you not think that's expected?
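
The override behaviour follows from pg_regress processing its options in
order, with a later value replacing an earlier one — the usual
getopt-style rule. Any parser with the same rule shows it (argparse here,
purely as an illustration):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--inputdir", default=".")

# pgxs.mk supplies --inputdir=$(srcdir) first; a second --inputdir from
# REGRESS_OPTS comes later on the command line and simply wins
args = parser.parse_args(["--inputdir=.", "--inputdir=test"])
print(args.inputdir)  # test
```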

regards, tom lane




Re: [HACKERS] Proposal: SET ROLE hook

2016-01-06 Thread Joe Conway
On 01/06/2016 02:39 AM, Pavel Stehule wrote:
> I did a review of this patch.
> 
> 1. the proposal is clean and there aren't any objections against it. I
> checked the implementation, and it does exactly what was proposed.
> 
> 2. This hook has an impact only on the SET ROLE TO XXX statement, which
> isn't in our critical path, so there is no performance impact
> 
> 3. I was able to apply the patch cleanly without any problems or warnings.
> 
> 4. All regression tests passed
> 
> 5. there are no tests and documentation, but that is usual for a hook
> 
> I have no objections
> 
> I'll mark this patch as ready for committer

Thanks for the review!

For what it's worth, I did look at Andres' idea and in fact created an
extension available here: https://github.com/pgaudit/set_user

I also looked at implementing equivalent functionality with the
ProcessUtility_hook. That appears feasible but would involve a fair
amount of code duplication from the backend.

Compared to both of these alternatives, I still feel that the specific
SET ROLE hook is cleanest and best path forward. If there are no other
comments or concerns, I will commit this in a day or two.

Thanks,

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: [HACKERS] Optimizer questions

2016-01-06 Thread Simon Riggs
On 6 January 2016 at 09:03, David Rowley 
wrote:


> It seems worthwhile to investigate how we might go about improving this so
> that the evaluation of the target list happens after LIMIT, at least for
> the columns which are not required before LIMIT.
>

Currently, the FunctionScan executes until it decides to stop. Then later
the LIMIT is applied.

To change this, you'd need to pass down the LIMIT value into the
FunctionScan node, as is already done between Limit/Sort nodes.

On the FunctionScan,  you can set the max_calls value in the
FuncCallContext, but currently that isn't enforced.
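
The idea of pushing the limit down so the producing node stops early —
rather than generating every row and discarding the excess afterwards —
can be sketched with a generator (Python, illustrative only; the real
change would wire a bound into the FunctionScan executor node):

```python
from itertools import islice

def function_scan():
    """Stands in for a set-returning function: it would happily
    produce rows forever if fully executed."""
    n = 0
    while True:
        yield n
        n += 1

# With the LIMIT applied afterwards, the scan has no idea it can stop
# early.  Pushing the bound down means asking the producer for only
# the rows actually needed:
limited = list(islice(function_scan(), 5))
print(limited)  # [0, 1, 2, 3, 4]
```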

-- 
Simon Riggs    http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] Commitfest app

2016-01-06 Thread Magnus Hagander
On Wed, Jan 6, 2016 at 4:50 PM, Magnus Hagander  wrote:

> Hi!
>
> I'm about to start some upgrades on the machine running
> commitfest.postgresql.org. The app itself won't change, but the
> underlying OS and webserver will, so there will be some disruptions of
> service and some strange and scary error messages for a while.
>
> Hopefully it will take less than an hour, but I'll let you know when it's
> done as it might be a bit longer.
>
>
Upgrade is now complete. If you see any further issues, please let me know
and I'll look into them. None are *expected* anymore.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] parallel joins, and better parallel explain

2016-01-06 Thread Robert Haas
On Mon, Jan 4, 2016 at 8:52 PM, Dilip Kumar  wrote:
> One strange behaviour: after increasing the number of processors for the VM,
> max_parallel_degree=0 is also performing better.

So, you went from 6 vCPUs to 8?  In general, adding more CPUs means
that there is less contention for CPU time, but if you already had 6
CPUs and nothing else running, I don't know why the backend running
the query would have had a problem getting a whole CPU to itself.  If
you previously only had 1 or 2 CPUs then there might have been some
CPU competition with background processes, but if you had 6 then I
don't know why the max_parallel_degree=0 case got faster with 8.
Anyway, I humbly suggest that this query isn't the right place to put
our attention.  There's no reason why we can't improve things further
in the future, and if it turns out that lots of people have problems
with the cost estimates on multi-batch parallel hash joins, then we
can revise the cost model.  We wouldn't treat a single query where a
non-parallel multi-batch hash join run slower than the costing would
suggest as a reason to revise the cost model for that case, and I
don't think this patch should be held to a higher standard.  In this
particular case, you can easily make the problem go away by tuning
configuration parameters, which seems like an acceptable answer for
people who run into this, unless it becomes clear that this particular
problem is widespread and can't be solved without configuration
changes that introduce other issues at the same time.

Keep in mind that, right now, the patch is currently doing just about
the simplest thing possible, and that's working pretty well.  Anything
we change at this point is going to be in the direction of adding more
complexity than what I've got right now and more than we've got in the
corresponding non-parallel case.  That's OK, but I think it's
appropriate that we only do that if we're pretty sure that those
changes are going to be an improvement.  And I think, by and large,
that we don't have enough perspective on this to know that at this
point.  Until this gets some wider testing, which probably isn't going
to happen very much until this gets committed, it's hard to say which
problems are just things we're artificially creating in the lab and
which ones are going to be annoyances in the real world.  Barring
strenuous objections or discovery of more serious problems with this
than have turned up so far, I'm inclined to go ahead and commit it
fairly soon, so that it attracts some more eyeballs while there's
still a little time left in the development cycle to do something
about whatever the systematic problems turn out to be.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pglogical_output - a general purpose logical decoding output plugin

2016-01-06 Thread Peter Eisentraut
On 12/22/15 4:55 AM, Craig Ringer wrote:
> I'm a touch frustrated by that, as a large part of the point of
> submitting the output plugin separately and in advance of the downstream
> was to get attention for it separately, as its own entity. A lot of
> effort has been put into making this usable for more than just a data
> source for pglogical's replication tools.

I can't imagine that there is a lot of interest in a replication tool
where you only get one side of it, no matter how well-designed or
general it is.  Ultimately, what people will want to do with this is
replicate things, not muse about its design aspects.  So if we're going
to ship a replication solution in PostgreSQL core, we should ship all
the pieces that make the whole system work.

Also, I think there are two kinds of general systems: common core, and
all possible features.  A common core approach could probably be made
acceptable with the argument that anyone will probably want to do things
this way, so we might as well implement it once and give it to people.
In a way, the logical decoding interface is the common core, as we
currently understand it.  But this submission clearly has a lot of
features beyond just the basics, and we could probably go through them
one by one and ask, why do we need this bit?  So that kind of system
will be very hard to review as a standalone submission.





Re: [HACKERS] Additional role attributes && superuser review

2016-01-06 Thread Robert Haas
On Wed, Jan 6, 2016 at 11:13 AM, Stephen Frost  wrote:
> I just wanted to start off by saying thank you for taking the time read
> and comment with your thoughts on this concept.  I was a bit frustrated
> about it feeling rather late, but appreciate the comments which have
> been made as they've certainly been constructive.

Thanks.

>> Well, there's certainly some set of privileges that will make them all
>> work.  But it might include more than some of them want.  If you done
>> any analysis of this, I have not seen it posted to this thread.
>
> I can certainly work up a formal analysis and submit it for
> consideration.

I would be in favor of that.

>> 1. This doesn't really seem like the same thing.  You're introducing
>> something quite new here: these are not role attributes that apply
>> only to the role itself, but inheritable role attributes.
>
> This approach started out by adding role attributes to handle these
> kinds of rights, but in discussion with Tom and Magnus, iirc (no, I
> don't have the specific links or threads, though I have asked Magnus to
> take a look at this thread, as he has time), the idea of default roles
> seemed better specifically because they would then be inheritable and
> granting access could also be delegated.

I agree that predefined roles are better than additional role
attributes.  I don't agree that predefined roles are better than GRANT
EXECUTE ON FUNCTION pg_catalog.blah().  I think full support for GRANT
EXECUTE ON FUNCTION pg_catalog.blah() is undeniably useful and will be
a very clear improvement over what we have now.  I think predefined
roles are a reasonable thing for somebody to want, but I don't think
it's nearly as clear-cut, and I'm very much unconvinced that we know
that the ones you're proposing are the right ones.

>> 3. It was clear from the outset that the replication role's basic
>> purpose was to be sufficient privilege for a streaming standby and no
>> more.  The remit of these roles is a lot less clear, at least to me.
>
> I've certainly intended the purpose of these roles to be clear and
> documented.  The point I was trying to address above is that we
> certainly appear fine to add additional privileges as new capabilities
> are added to existing role attributes (the entire logical replication
> system was added after the replication role attribute).

Mmmph.  I think we got lucky there as much as anything.  replication
already allows sufficient privilege to extract all data in the
database, so allowing it to also request a logical change stream isn't
really a privilege escalation.  That's not going to be true for what
you are proposing here.

>> > Adding pg_dump dump and restore support for catalog ACLs implies that
>> > we're supporting ACLs on all catalog objects- including tables.
>>
>> Not to me it doesn't.  I think we could support it just for functions,
>> and have it continue to be as weird as it is currently for other types
>> of objects until somebody gets around to straightening that out.  If
>> we want to support it for more object types, great, but I don't think
>> that's a hard requirement.
>
> Alright, that would certainly simplify things if we're talking about
> only functions.  The only concern which I have there is if there are any
> non-function cases that we'll end up coming across, and I'm still a bit
> nervous about the "old pg_dump / new database" restore concern, but
> perhaps that's an acceptable issue.

I wouldn't worry about that a bit.  We recommend that users always
pg_dump using the newest version of pg_dump, even if the dumped server
is older.  That advice will be entirely satisfactory in this case
also.

> Based on Noah's response (which I respond to below), we seem to still
> be debating the whole concept of default roles.  I'm happy to provide
> detailed analysis if we're able to agree on the concept.

I think you need to provide detailed analysis SO THAT we can agree on
the concept.

>> Oh, sure: they are not backup tools specifically.  But anything that
>> might need elevated privileges deserves consideration here: what sort
>> of subdivision of the superuser role would make that need go away?
>
> The general approach which I've been using for the default roles is that
> they should grant rights which aren't able to be used to trivially make
> oneself a superuser.

That's a good principle, but not a sufficient guide.

>> Stop.  Even if PostgreSQL introduces pg_monitor, check_postgres will do 
>> itself
>> a favor by not using it.  The moment the two projects' sense of monitoring
>> privileges falls out of lockstep, benefits from using pg_monitor go negative.
>> check_postgres should instead furnish a script that creates a role holding
>> required privileges and/or SECURITY DEFINER helpers.  If check_postgres 
>> starts
>> using an object, say pgstattuple, it will wish to use it in all versions.
>> Since PostgreSQL will change pg_monitor privileges in major releases only,
>> 

Re: [HACKERS]WIP: Covering + unique indexes.

2016-01-06 Thread Jeff Janes
On Tue, Jan 5, 2016 at 11:55 PM, David Rowley
 wrote:
> On 4 January 2016 at 21:49, David Rowley 
> wrote:
>>
>> I've not tested the patch yet. I will send another email soon with the
>> results of that.
>
>
> Hi,
>
> As promised I've done some testing on this, and I've found something which
> is not quite right:
>
> create table ab (a int,b int);
> insert into ab select x,y from generate_series(1,20) x(x),
> generate_series(10,1,-1) y(y);
> create index on ab (a) including (b);
> explain select * from ab order by a,b;
> QUERY PLAN
> --
>  Sort  (cost=10.64..11.14 rows=200 width=8)
>Sort Key: a, b
>->  Seq Scan on ab  (cost=0.00..3.00 rows=200 width=8)
> (3 rows)

If you set enable_sort=off, then you get the index-only scan with no
sort.  So it believes the index can be used for ordering (correctly, I
think), just sometimes it thinks it is not faster to do it that way.

I'm not sure why this would be a correctness problem.  The covered
column does not participate in uniqueness checks, but it still usually
participates in index ordering.  (That is why dummy op-classes are
needed if you want to include non-sortable-type columns as being
covered.)

>
> This is what I'd expect
>
> truncate table ab;
> insert into ab select x,y from generate_series(1,20) x(x),
> generate_series(10,1,-1) y(y);
> explain select * from ab order by a,b;
>   QUERY PLAN
> --
>  Index Only Scan using ab_a_b_idx on ab  (cost=0.15..66.87 rows=2260
> width=8)
> (1 row)
>
> This index, as we've defined it should not be able to satisfy the query's
> order by, although it does give correct results, that's because the index
> seems to be built wrongly in cases where the rows are added after the index
> exists.

I think this just causes differences in planner statistics leading to
different plans.  ANALYZE the table and it goes back to doing the
sort, because it thinks the sort is faster.

Cheers,

Jeff




Re: [HACKERS] [DOCS] Description tweak for vacuumdb

2016-01-06 Thread Robert Haas
On Sun, Jan 3, 2016 at 9:05 PM, Ian Barwick  wrote:
> Like the docs say, vacuumdb is a "wrapper around the SQL command VACUUM"
> which I used to use in dim-and-distant pre-autovacuum days for cronjobs,
> but until messing around with pg_upgrade recently I hadn't really had much
> use for it. Anyway, the docs also say "There is no effective difference
> between vacuuming and analyzing databases via this utility and via other
> methods for accessing the server", which IMHO seems a bit out-of-date as
> it now does two things which you can't do directly via e.g. psql:
>
> - generate statistics in stages (9.4)
> - parallel vacuum (9.5)
>
> Attached patches (for 9.4 and 9.5/HEAD) update the description to make
> clear that it now does a bit more than just execute a single command.

So I think this is a good change, but...

+   server. However it does provide additional functionality for generating
+   statistics in stages, which is useful when optimizing a newly populated
+   database, and for executing vacuum or analyze commands in parallel.

...it's not entirely clear to which portion of the sentence the clause
at the end applies.  Is executing vacuum or analyze commands in
parallel another case in which generating statistics in stages is
helpful, or is it another thing altogether that vacuumdb does?  I know
the answer, and anyone who is reasonably clueful will too, but it
seems to me that we could probably tweak the phrasing here so that a
non-clueful person wouldn't go astray.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2016-01-06 Thread Jim Nasby

On 1/6/16 5:51 PM, Tom Lane wrote:

Having said that, it occurs to me that one way to contribute without
actually writing C code would be to try to drive those unfinished
discussions to consensus, and come up with specs for features that people
agree are well-thought-out.  Conversely, establishing a consensus that a
proposal is a bad idea and it should go away from the list would be a
useful activity.


+1.

Somewhat related to that, I don't believe there's any reason why commit 
fest managers need to be committers; it seems like the job is really 
just about reading through email activity to see where things are at.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2016-01-06 Thread Joshua D. Drake

On 01/06/2016 03:57 PM, Jim Nasby wrote:

On 1/6/16 5:51 PM, Tom Lane wrote:

Having said that, it occurs to me that one way to contribute without
actually writing C code would be to try to drive those unfinished
discussions to consensus, and come up with specs for features that people
agree are well-thought-out.  Conversely, establishing a consensus that a
proposal is a bad idea and it should go away from the list would be a
useful activity.


+1.

Somewhat related to that, I don't believe there's any reason why commit
fest managers need to be committers; it seems like the job is really
just about reading through email activity to see where things are at.


Agreed. That is a great way for people to contribute that aren't "hackers".

JD



--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [HACKERS] Add numeric_trim(numeric)

2016-01-06 Thread Tom Lane
Dean Rasheed  writes:
> On 6 January 2016 at 20:09, Robert Haas  wrote:
>> On Wed, Jan 6, 2016 at 10:21 AM, Dean Rasheed  
>> wrote:
>>> It seems like a useful function to have, but perhaps it should just be
>>> called trim() rather than numeric_trim(), for consistency with the
>>> names of the other numeric functions, which don't start with
>>> "numeric_".

>> That wouldn't work in this case, because we have hard-coded parser
>> productions for TRIM().

Does it have to be called TRIM()?  After looking at the spec for it
I'd think rtrim() is the more correct analogy.

Also worth noting is that those hard-wired parser productions aren't
as hard-wired as all that.

regression=# select trim(43.5);
ERROR:  function pg_catalog.btrim(numeric) does not exist

If we wanted to call the function btrim() underneath, this would
Just Work.  However, to alleviate confusion, it might be better
if we altered the grammar productions to output "trim" not "btrim"
for the not-LEADING-or-TRAILING cases, and of course renamed the
relevant string functions to match.

A different approach is that I'm not real sure why we want a function
that returns a modified numeric value at all.  To the extent I understood
Marko's original use case, it seems like what you'd invariably do with the
result is extract its scale().  Why not skip the middleman and define a
function named something like minscale() or leastscale(), which returns an
int that is the smallest scale that would not drop data?  (If you actually
did want the modified numeric value, you could use round(x, minscale(x))
to get it.)
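For intuition, here is a rough Python sketch of the semantics proposed above, using the stdlib decimal module as a stand-in for numeric. Note that minscale() and numeric_trim() are hypothetical names from this thread, not existing PostgreSQL functions, and PostgreSQL's numeric type is not Python's Decimal.

```python
from decimal import Decimal

def minscale(x: Decimal) -> int:
    # Smallest number of digits after the decimal point that drops no data:
    # normalize() strips trailing zeros, and the negated exponent of the
    # result is that scale.  Clamp at 0 for values whose trailing zeros sit
    # before the decimal point (e.g. 100 normalizes to 1E+2).
    return max(0, -x.normalize().as_tuple().exponent)

def numeric_trim(x: Decimal) -> Decimal:
    # round(x, minscale(x)) -- the composition suggested above
    return x.quantize(Decimal(1).scaleb(-minscale(x)))

print(minscale(Decimal("43.500")))      # 1
print(numeric_trim(Decimal("43.500")))  # 43.5
print(minscale(Decimal("100")))         # 0
```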

regards, tom lane




Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2016-01-06 Thread Greg Stark
On Wed, Jan 6, 2016 at 11:42 PM, Jim Nasby  wrote:
> Right. Personally, I feel the TODO has pretty much outlived its usefulness.
> An issue tracker would make maintaining items like this a lot more
> reasonable, but it certainly wouldn't be free.

Eh, a bug tracker that tracks actual bugs would be useful, I don't
think anyone would argue with that. A vague "issue" tracker that just
collects ideas people have had that seemed like a good idea at some
time in history would suffer exactly the same problem the TODO has.

-- 
greg




Re: [HACKERS] looking for an array-extract-item-as-it operator

2016-01-06 Thread Tom Lane
Peter Krauss  writes:
> I need to access an array-item from an array of arrays.

Multi-dimensional arrays in Postgres are not "arrays of arrays".
If you persist in thinking they are, it's mostly going to lead you
astray about what will work or not.

Having said that, you might find that plpgsql's 
FOREACH ... SLICE ... IN ARRAY ... LOOP
construct would help you, if you need to iterate through the
rows sequentially.

http://www.postgresql.org/docs/9.4/static/plpgsql-control-structures.html#PLPGSQL-FOREACH-ARRAY

regards, tom lane




[HACKERS] looking for an array-extract-item-as-it operator

2016-01-06 Thread Peter Krauss
I need to access an array-item from an array of arrays. Suppose


WITH t AS (SELECT '{{1,2,3},{33,44,55}}'::int[][] AS a)
SELECT
  a[2],   -- returns null (!), why does this not work?
  a[2:2]  -- returns array-in-array, not a simple array
FROM t;


Is there a simple function or operator to access it as is?

Summarizing: I am looking for a  f(a,2)  that returns  {33,44,55}, not
{{33,44,55}}.
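For intuition only, here is how a genuine array-of-arrays (a Python list of lists, which a PostgreSQL multi-dimensional array is not) makes the distinction being asked about: single-subscript indexing drops one nesting level, while slicing keeps it. The analogy is illustrative, not a claim about PostgreSQL's behavior.

```python
a = [[1, 2, 3], [33, 44, 55]]

# Indexing with a single subscript drops one level of nesting --
# the behavior the poster wants from the hypothetical f(a, 2).
print(a[1])    # [33, 44, 55]

# Slicing keeps the outer level, much like a[2:2] in PostgreSQL.
print(a[1:2])  # [[33, 44, 55]]
```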


Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2016-01-06 Thread Tom Lane
Jim Nasby  writes:
> Somewhat related to that, I don't believe there's any reason why commit 
> fest managers need to be committers; it seems like the job is really 
> just about reading through email activity to see where things are at.

They don't need to be.  Michael Paquier did a fine job with the last CF.

regards, tom lane




Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2016-01-06 Thread Peter Geoghegan
On Wed, Jan 6, 2016 at 3:51 PM, Tom Lane  wrote:
> Yeah.  The other problem is that stuff that's actually small doesn't tend
> to hang around undone for long, so there's not really a broad array of
> stuff just waiting for someone to have a little time.  If we had a more
> actively maintained TODO list, it would largely contain (a) stuff that
> there's insufficient consensus on, and (b) stuff that's just big mean and
> nasty to implement.

I think that some friendly advice to less experienced contributors
about a good project for them to work on is enormously valuable, which
is why I try to do that whenever I can. Unfortunately, and perversely,
the TODO list is pretty far from that. Things go on the todo list
because they don't have a favorable cost/benefit ratio. I wouldn't
suggest a project to anyone that I would not be willing to work on
myself, which excludes most items on the TODO list.

-- 
Peter Geoghegan




Re: [HACKERS] Improving replay of XLOG_BTREE_VACUUM records

2016-01-06 Thread Michael Paquier
On Thu, Jan 7, 2016 at 12:20 AM, Alvaro Herrera
 wrote:
> Vladimir Borodin wrote:
>
>> There are situations in which vacuuming big btree index causes stuck
>> in WAL replaying on hot standby servers for quite a long time. I’ve
>> described the problem in more details in this thread [0]. Below in
>> that thread Kevin Grittner proposed a good way for improving btree
>> scans so that btree vacuuming logic could be seriously simplified.
>> Since I don’t know when that may happen I’ve done a patch that makes
>> some improvement right now. If Kevin or someone else would expand [1]
>> for handling all types of btree scans, I suppose, my patch could be
>> thrown away and vacuuming logic should be strongly rewritten.
>
> You submitted this patch in May 2015 -- and 7 months later, Simon came
> up with another patch that's supposed to fix the underlying problem, so
> that this shouldn't be a problem anymore.
>
> Would you please have a look at Simon's patch, in particular verify
> whether it solves the performance dip in your testing environment?
> https://www.postgresql.org/message-id/CANP8%2BjJuyExr1HnTAdZraWsWkfc-octhug7YPtzPtJcYbyi4pA%40mail.gmail.com
> (Note there's an updated patch a few emails down the thread.)
>
> If it seems to fix the problem for you, I think we should mark yours
> rejected and just apply Simon's.

Simon's patch (justly) does not update lastBlockVacuumed in the case
of toast indexes, but Vladimir's patch would still optimize this case,
no?
-- 
Michael




Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2016-01-06 Thread Michael Paquier
On Thu, Jan 7, 2016 at 9:28 AM, Tom Lane  wrote:
> Jim Nasby  writes:
>> Somewhat related to that, I don't believe there's any reason why commit
>> fest managers need to be committers; it seems like the job is really
>> just about reading through email activity to see where things are at.
>
> They don't need to be.

More thoughts on that. I would even think the contrary: someone who has
just monitored -hackers for one or two years and knows how things are
handled would be able to do this job as well, the only issues being to
juggle with many threads at the same time, to be sure that each patch
status is correct, and to not lose motivation during the CF. The last
one is the hardest by far.
-- 
Michael




Re: [HACKERS] pg_conversion seems rather strangely defined

2016-01-06 Thread Tom Lane
Noah Misch  writes:
> On Tue, Jan 05, 2016 at 01:46:51PM -0500, Tom Lane wrote:
>> I do not see a lot of point in the namespacing of encoding conversions
>> either.  Does anyone really need or use search-path-dependent lookup of
>> conversions?

> I have not issued CREATE CONVERSION except to experiment, and I have never
> worked in a database in which someone else had created one.  Among PGXN
> distributions, CREATE CONVERSION appears only in the pyrseas test suite.  It
> could be hard to track down testimony on real-world usage patterns, but I
> envision two credible patterns.  First, you could change the default search
> path to "corrected_conversions, pg_catalog, $user, public" and inject fixed
> versions of the system conversions.  One could use that to backport commit
> 8d3e090.  Second, you could add conversions we omit entirely, like UTF8 ->
> MULE_INTERNAL.  Dropping search-path-dependent lookup would remove the
> supported way to fix system conversions.

The built-in conversions are very intentionally not pinned.  So to my
mind, the supported way to replace one is to drop it and create your own.
The way you describe works only if an appropriate search path is installed
at the time a new session activates its client encoding conversions.  TBH,
I have no idea whether we apply (for example) "ALTER ROLE SET search_path"
before that happens; but even if we do, there's no real guarantee that it
wouldn't change.  We publish no documentation about the order of startup
actions.  Moreover past attempts at defining dependencies between GUC
settings have been spectacular failures, so I really don't want to go
there in this context.

It would only be important to be able to do it like that if different
users of the same database had conflicting ideas about what was the
correct conversion between client and database encodings.  I submit
that that's somewhere around epsilon probability, considering we have
not even heard of anyone replacing the system conversions at all.

(Adding conversions we don't supply is, of course, orthogonal to this.)

Moreover, we have precedent both for this approach being a bad idea
and for us changing it without many complaints.  We used to have
search-path-dependent lookup of default index operator classes.
We found out that that was a bad idea and got rid of it, cf commit
3ac1ac58c.  This situation looks much the same to me.

regards, tom lane




Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers

2016-01-06 Thread Amit Kapila
On Fri, Dec 25, 2015 at 6:36 AM, Robert Haas  wrote:

> On Wed, Dec 23, 2015 at 1:16 AM, Amit Kapila 
> wrote:
> >> >
> >> > Here procArrayGroupXid sounds like Xid at group level, how about
> >> > procArrayGroupMemberXid?
> >> > Find the patch with renamed variables for PGProc
> >> > (rename_pgproc_variables_v1.patch) attached with mail.
> >>
> >> I sort of hate to make these member names any longer, but I wonder if
> >> we should make it procArrayGroupClearXid etc.
> >
> > If we go by this suggestion, then the name will look like:
> > PGProc
> > {
> > ..
> > bool procArrayGroupClearXid;
> > pg_atomic_uint32 procArrayGroupNextClearXid;
> > TransactionId procArrayGroupLatestXid;
> > ..
> >
> > PROC_HDR
> > {
> > ..
> > pg_atomic_uint32 procArrayGroupFirstClearXid;
> > ..
> > }
> >
> > I think whatever I sent in last patch were better.  It seems to me it is
> > better to add some comments before variable names, so that anybody
> > referring them can understand better and I have added comments in
> > attached patch rename_pgproc_variables_v2.patch to explain the same.
>
> Well, I don't know.  Anybody else have an opinion?
>
>
It seems that either people don't have any opinion on this matter or they
are okay with either of the naming conventions being discussed.  I think
specifying Member after procArrayGroup can help distinguish which
variables are specific to the whole group and which are specific to a
particular member.  I think that will be helpful for other places as well
if we use this technique to improve performance.  Let me know what
you think about the same.

I have verified that the previous patches can be applied cleanly and pass
make check-world.  To avoid confusion, I am attaching the latest
patches with this mail.


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


rename_pgproc_variables_v2.patch
Description: Binary data


group_update_clog_v4.patch
Description: Binary data



Re: [HACKERS] Add numeric_trim(numeric)

2016-01-06 Thread Robert Haas
On Wed, Jan 6, 2016 at 6:51 PM, Dean Rasheed  wrote:
> On 6 January 2016 at 20:09, Robert Haas  wrote:
>> On Wed, Jan 6, 2016 at 10:21 AM, Dean Rasheed  
>> wrote:
>>> It seems like a useful function to have, but perhaps it should just be
>>> called trim() rather than numeric_trim(), for consistency with the
>>> names of the other numeric functions, which don't start with
>>> "numeric_".
>>
>> That wouldn't work in this case, because we have hard-coded parser
>> productions for TRIM().
>>
>
> Ah. Good point.
>
> Pity -- I would have liked a nice short name for this, in a similar
> vein to the existing numeric functions.

My experiences with function overloading haven't been enormously
positive - things that we think will work out sometimes don't, a la
the whole pg_size_pretty mess.

In this case, trim(stringy-thingy) and trim(numberish-thingy) aren't
even really doing the same thing, which makes me even less excited
about it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2016-01-06 Thread Jim Nasby

On 1/6/16 9:22 PM, Michael Paquier wrote:

On Thu, Jan 7, 2016 at 9:28 AM, Tom Lane  wrote:

Jim Nasby  writes:

Somewhat related to that, I don't believe there's any reason why commit
fest managers need to be committers; it seems like the job is really
just about reading through email activity to see where things are at.


They don't need to be.


More thoughts on that. I would even think the contrary: someone who has
just monitored -hackers for one or two years and knows how things are
handled would be able to do this job as well, the only issues being to
juggle with many threads at the same time, to be sure that each patch
status is correct, and to not lose motivation during the CF. The last
one is the hardest by far.


Sorry, I inadvertently used 'committer' when I should have said 'coder'. 
There's nothing code-related about CFM, and I think it's a disservice 
to the community to have coders serving that role.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Comment typo in namespace.c

2016-01-06 Thread Amit Langote
On 2016/01/07 1:03, Tom Lane wrote:
> Amit Langote  writes:
>> On Wed, Jan 6, 2016 at 11:51 PM, Tom Lane  wrote:
>>> "non-existant" is flat wrong, so if we're going to fix typos, let's
>>> fix them to actually be English.
> 
> So, non-existent? non-extant? It seems to me like an 's' accidentally
>> sneaked in when the author of the comment tried to write the latter
>> spelling. But the former sounds more familiar (at least to me).
> 
> "existent" is a word according to my dictionary, so "non-existent"
> is fine.  "extant" is also a word but it's less common and doesn't
> really mean the same thing --- it carries a connotation of "still
> in existence, surviving".  So you might say "Stonebraker's papers
> about Postgres from the '80s are still extant".  "Existent" just
> means "in existence" without any particular implication about time
> passing, so it's probably what is meant in most cases.
> 
> (Actually, in the particular context here, I guess "extant" would
> be sensible, but it would be rather hi-falutin' usage.  I'd go
> with "existent".)

Thanks for the explanation.

Regards,
Amit






Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2016-01-06 Thread Jim Nasby

On 1/6/16 6:18 PM, Greg Stark wrote:

On Wed, Jan 6, 2016 at 11:42 PM, Jim Nasby  wrote:

Right. Personally, I feel the TODO has pretty much outlived its usefulness.
An issue tracker would make maintaining items like this a lot more
reasonable, but it certainly wouldn't be free.


Eh, a bug tracker that tracks actual bugs would be useful, I don't
think anyone would argue with that. A vague "issue" tracker that just
collects ideas people have had that seemed like a good idea at some
time in history would suffer exactly the same problem the TODO has.


I think one of the biggest hurdles people face is getting the community 
to agree that something is a desired feature. So if there was a list of 
things that the community had agreed would be good I think that itself 
would be useful. Even better if items had a rough outline of the work 
necessary.


Obviously that won't do too much for really big features. But if our 
experienced hackers focused less on coding and more on design and 
creating smaller tasks that people could work on, more people could 
potentially be put to work.


ISTM that the design work needs to be done and documented no matter 
what, so there shouldn't be much overhead there. The overhead would be 
in maintaining the tracker and making sure folks were actively getting 
stuff done. That can be done by a non-coder. That means it shouldn't 
really cost the community much in terms of current resources, as long as 
we attract new people to take on these new tasks.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Multi-tenancy with RLS

2016-01-06 Thread Stephen Frost
Robert,

* Robert Haas (robertmh...@gmail.com) wrote:
> On Tue, Jan 5, 2016 at 11:07 PM, Haribabu Kommi
>  wrote:
> > Maybe you missed to apply the 3_shared_catalog_tenancy_v4 patch,
> > because 4_database_catalog_tenancy_v5 patch depends on it.
> >
> > Here I attached all the patches for your convenience, I am able to
> > apply all patches in the order without any problem.
> 
> Is any committer thinking about taking a serious look at this patch series?

Joe took a look at it earlier and I'm definitely interested in it.
Further, he and I have chatted about it a few times.

Reference here to comments from Joe:
http://www.postgresql.org/message-id/55f1fb2e.8020...@joeconway.com

> I ask because (1) it seems like it could be nice to have but (2) it
> frightens me terribly.  We are generally very sparing about assuming
> that "stuff" (triggers, partial indexes, etc.) that works for user
> tables can also be made to work for system tables.  I haven't thought
> deeply about what might go wrong in this particular case, but it
> strikes me that if Haribabu Kommi is building something that is doomed
> for some reason, it would be good to figure that out before he spends
> any more time on it than he already has.

I'm not aware of any particular reason for it to be doomed out of the
gate.  That isn't to say that we won't find an issue with it or that
I've given it an in depth look (I've not), but the concept seems
reasonable enough and I can't think of any off-hand reasons why it
won't work.

I will note that there's a couple different patches involved here and
they really deserve their own review and consideration.

> Apart from the issue of whether this is doomed for some architectural
> reason, it is not entirely clear to me that there's any consensus that
> we want this.  I don't think that I understand the issues here well
> enough to proffer an opinion of my own just yet... but I'd like to
> hear what other people think.

I'm certainly of the opinion that we want this or something similar.

The big caveat kicking around in my head is if we want to have our own
set of defined policies or if we want to give flexibility to the
administrator to define their own policies.  In particular, I'm
wondering about things like:

CREATE POLICY namespace_limit ON pg_namespace TO company1 USING
  (substring(nspname,1,9) = 'company1_');

Which is a bit different, as I understand it, from what Haribabu has
been proposing and quite a bit more complicated, as we'd then have to
make the internal lookups respect the policy (so things like CREATE
SCHEMA would have to check if you're allowed to actually create that
schema, which would be based on the policy...).
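In effect, a policy like that filters catalog rows on a tenant name prefix. As a toy sketch of the visibility rule only (plain Python; the names are made up, and this is nothing like how RLS actually evaluates policies internally):

```python
# Toy model of the namespace_limit policy above: each tenant only sees
# pg_namespace rows whose nspname carries the tenant's prefix.
# Illustration only -- not how RLS is implemented.
def visible_namespaces(catalog_rows, tenant):
    prefix = tenant + "_"
    return [row for row in catalog_rows if row["nspname"].startswith(prefix)]

catalog = [
    {"nspname": "company1_sales"},
    {"nspname": "company1_hr"},
    {"nspname": "company2_sales"},
    {"nspname": "pg_catalog"},
]

print(visible_namespaces(catalog, "company1"))
# [{'nspname': 'company1_sales'}, {'nspname': 'company1_hr'}]
```

The hard part is exactly what's noted above: internal lookups such as CREATE SCHEMA would also have to consult the same predicate, not just catalog scans.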

I don't know if we want to try and support that level of flexibility
or if we'd like to simply go based on the concept of 'you only get to
see what you have access to', which I'm thinking will allow us to avoid
changing the existing functions as they are already doing permissions
checks.

My general thinking here is that we could support one set of provided
policies via these patches and then, if there is sufficient interest,
generalize how policies on system catalogs are handled and eventually
allow users to redefine the system catalog policies.

Thanks!

Stephen




Re: [HACKERS] pg_conversion seems rather strangely defined

2016-01-06 Thread Noah Misch
On Tue, Jan 05, 2016 at 01:46:51PM -0500, Tom Lane wrote:
> I do not see a lot of point in the namespacing of encoding conversions
> either.  Does anyone really need or use search-path-dependent lookup of
> conversions?

I have not issued CREATE CONVERSION except to experiment, and I have never
worked in a database in which someone else had created one.  Among PGXN
distributions, CREATE CONVERSION appears only in the pyrseas test suite.  It
could be hard to track down testimony on real-world usage patterns, but I
envision two credible patterns.  First, you could change the default search
path to "corrected_conversions, pg_catalog, $user, public" and inject fixed
versions of the system conversions.  One could use that to backport commit
8d3e090.  Second, you could add conversions we omit entirely, like UTF8 ->
MULE_INTERNAL.  Dropping search-path-dependent lookup would remove the
supported way to fix system conversions.

> (If they do, it's probably broken anyway, since for example
> we do not trouble to re-identify the client encoding conversion functions
> when search_path changes.)

That's bad in principle, but I'll guess it's tolerable in practice.  Switching
among implementations of a particular conversion might happen with O(weeks) or
longer period, like updating your system's iconv() conversion tables.  I can't
easily envision one application switching between implementations over the
course of a session.  (An application doing that today probably works around
the problem, perhaps with extra "SET client_encoding" calls.)




Re: [HACKERS] Regression caused by recent change to initdb?

2016-01-06 Thread Amit Langote
On 2016/01/06 23:50, Tom Lane wrote:
> Amit Langote  writes:
>> On 2016/01/06 17:32, Amit Langote wrote:
>>> I stumbled upon a possibly strange behavior which may be related to recent
>>> initdb changes. For a freshly initdb'd cluster, the following looks fishy:
>>> ...
>>> These seem to be leftovers of activities of initdb.c's setup_description()
>>> and setup_collation().
> 
>> I noticed these leftovers are not present in template1.
> 
> Ah, right: they get deleted from template1 correctly when the
> initdb-driven session shuts down.  But because of the merger into a single
> session, they're still there at the instant that we clone template1 into
> template0 and postgres databases, and there is nothing to remove them from
> there.
> 
> The minimum-change way to deal with it would be to explicitly DROP those
> tables when we're done with them.
> 
> A possibly slightly less fragile answer is to run two sessions, the
> second of which *only* processes the DB copying steps.
> 
> Neither of these answers seems all that clean to me...

Thanks for fixing this in initdb.

So in general, we have no way to get rid of the copies of template1's temp
tables in a new database when the new database is created in the same
session as the one connected to template1. For example:

template1=# CREATE TEMP TABLE foo(a int);
CREATE TABLE

template1=# INSERT INTO foo VALUES (1);
INSERT 0 1

template1=# CREATE DATABASE test;
CREATE DATABASE

template1=# \c test
You are now connected to database "test" as user "amit".

test=# SELECT relname, relnamespace::regnamespace FROM pg_class
WHERE relnamespace != 'pg_catalog'::regnamespace
AND relnamespace != 'pg_toast'::regnamespace
AND relnamespace != 'information_schema'::regnamespace;
 relname | relnamespace
---------+--------------
 foo     | pg_temp_2
(1 row)

-- of course, there is no way to open it here (different backend id)
test=# SELECT * FROM foo;
ERROR:  relation "foo" does not exist
LINE 1: SELECT * FROM foo;

-- nor does it prevent creating a new temp table foo.
test=# CREATE TEMP TABLE foo(a int);
CREATE TABLE

test=# SELECT relname, relnamespace::regnamespace FROM pg_class WHERE
relnamespace != 'pg_catalog'::regnamespace AND relnamespace !=
'pg_toast'::regnamespace AND relnamespace !=
'information_schema'::regnamespace;
 relname | relnamespace
---------+--------------
 foo     | pg_temp_2
 foo     | pg_temp_3
(2 rows)

Maybe, we need not worry too much about this.

Thanks,
Amit






Re: [HACKERS] Function and view to retrieve WAL receiver status

2016-01-06 Thread Haribabu Kommi
On Wed, Jan 6, 2016 at 8:00 PM, Michael Paquier
 wrote:
> On Wed, Jan 6, 2016 at 3:04 PM, Michael Paquier
>  wrote:
>> Attached is an updated patch.
>
> Forgot to update rules.out...

Thanks for the update. Patch looks good to me.
I marked it as ready for committer.

Regards,
Hari Babu
Fujitsu Australia




Re: [HACKERS] Multi-tenancy with RLS

2016-01-06 Thread Amit Langote
On 2016/01/06 13:07, Haribabu Kommi wrote:
> On Wed, Jan 6, 2016 at 1:43 PM, Amit Langote
>>
>> Patch 4_database_catalog_tenancy_v5 fails to apply:
> 
> Maybe you missed to apply the 3_shared_catalog_tenancy_v4 patch,
> because 4_database_catalog_tenancy_v5 patch depends on it.

Oops, I even missed patches 1 and 2 at all.

> 
> Here I attached all the patches for your convenience, I am able to
> apply all patches in the order without any problem.

Okay, thanks!

I applied all the patches. I have a basic question. Sorry though if I've
entirely missed the point (and/or scope) of your proposal. I wonder if
something like the following should not have failed with the patch:

postgres=# CREATE POLICY class_policy ON pg_class TO PUBLIC USING
(relowner = current_user);
ERROR:  permission denied: "pg_class" is a system catalog

Is there no support yet for user-defined catalog policies?

Regards,
Amit






Re: [HACKERS] Add numeric_trim(numeric)

2016-01-06 Thread Pavel Stehule
2016-01-07 1:11 GMT+01:00 Tom Lane :

> Dean Rasheed  writes:
> > On 6 January 2016 at 20:09, Robert Haas  wrote:
> >> On Wed, Jan 6, 2016 at 10:21 AM, Dean Rasheed 
> wrote:
> >>> It seems like a useful function to have, but perhaps it should just be
> >>> called trim() rather than numeric_trim(), for consistency with the
> >>> names of the other numeric functions, which don't start with
> >>> "numeric_".
>
> >> That wouldn't work in this case, because we have hard-coded parser
> >> productions for TRIM().
>
> Does it have to be called TRIM()?  After looking at the spec for it
> I'd think rtrim() is the more correct analogy.
>
> Also worth noting is that those hard-wired parser productions aren't
> as hard-wired as all that.
>
> regression=# select trim(43.5);
> ERROR:  function pg_catalog.btrim(numeric) does not exist
>
> If we wanted to call the function btrim() underneath, this would
> Just Work.  However, to alleviate confusion, it might be better
> if we altered the grammar productions to output "trim" not "btrim"
> for the not-LEADING-or-TRAILING cases, and of course renamed the
> relevant string functions to match.
>
> A different approach is that I'm not real sure why we want a function
> that returns a modified numeric value at all.  To the extent I understood
> Marko's original use case, it seems like what you'd invariably do with the
> result is extract its scale().  Why not skip the middleman and define a
> function named something like minscale() or leastscale(), which returns an
> int that is the smallest scale that would not drop data?  (If you actually
> did want the modified numeric value, you could use round(x, minscale(x))
> to get it.)
>

An example like "round(x, minscale(x))" looks nice, but there can be
performance issues - you have to unpack the varlena 2x

I'll try to run some performance tests
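For reference, a rough Python sketch of the minscale() semantics under discussion (minscale() is only proposed here - it does not exist in PostgreSQL):

```python
from decimal import Decimal

def minscale(x: Decimal) -> int:
    """Smallest scale that would not drop data: the number of fractional
    digits left after trailing zeroes are stripped (never negative)."""
    return max(0, -x.normalize().as_tuple().exponent)

# round(x, minscale(x)) then yields the trimmed value:
x = Decimal("43.5000")
print(minscale(x))                  # 1
print(round(x, minscale(x)))        # 43.5
print(minscale(Decimal("120.00")))  # 0
```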

Regards

Pavel


> regards, tom lane
>


Re: [HACKERS] extend pgbench expressions with functions

2016-01-06 Thread Michael Paquier
On Tue, Dec 22, 2015 at 12:16 AM, Fabien COELHO  wrote:
> So you would just like to remove double double(int) and double sqrt(double)
> from the patch, and basically that is all? int int(double)??
> debug??? (hmmm, useful debugging a non trivial expression).

OK, so I am finally back on this item, and after a close look at the
code I am throwing away my concerns.

-   if (!evaluateExpr(st, expr, &result))
+   if (!evaluateExpr(thread, st, expr, &result))
    {
        st->ecnt++;
        return true;
    }
-   sprintf(res, INT64_FORMAT, result);
+   sprintf(res, INT64_FORMAT, INT(result));
Based on that all the final results of a \set command will have an
integer format, still after going through this patch, allowing double
as return type for nested function calls (first time "nested" is
written on this thread) is actually really useful, and that's what
makes sense for this feature. I am not sure why I haven't thought
about that before as well... So, even if for example the final result
of a variable is an integer, it is possible to do very fancy things
like that:
\set aid debug(random_exponential(1, 100, pi()))
\set bid debug(random_exponential(101, 200, pi()))
\set cid debug(random_gaussian(:aid, :bid, double(:aid * pi())))
SELECT :cid;

That's actually where things like sqrt() and pi() gain a lot in power
by working directly on the integers returned by the random functions.
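To illustrate the typing rule described above - nested calls may carry doubles, while the final \set result is coerced back to an integer, as the INT(result) hunk does - here is a toy Python model (assuming C-style truncation toward zero; this is not pgbench code):

```python
import math

# Toy model of pgbench expression typing: values are int or double,
# functions propagate the type, and only the final \set assignment
# coerces the result to an integer (the INT(result) in the patch).
def pgb_double(v):
    return float(v)      # double(x): force a double

def pgb_pi():
    return math.pi       # pi(): a double constant

def pgb_set(value):
    return int(value)    # final coercion, truncating toward zero

aid = 7
cid = pgb_set(pgb_double(aid * pgb_pi()))  # nested double arithmetic ...
print(cid)  # 21  ... but the variable ends up an integer
```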

Something that bothered me though while testing: the debug() function
is useful, but it becomes difficult to see its results efficiently
when many outputs are stacking, so I think that it would be useful to
be able to pass a string prefix to have the possibility to annotate a
debug entry, say "debug('foo', 5.2)" outputs:
debug(script=0,command=1): note: foo, int/double X
I guess that it would be useful then to allow as well concatenation of
strings using "+" with a new PgBenchExprType as ENODE_STRING, but
perhaps I am asking too much. Thoughts are welcome regarding that, it
does not seem mandatory as of now as this patch is already doing much.
We could remove some of the functions in the first shot of this patch
to simplify it a bit, but that does not look like a win as their
footprint on the code is low.

I haven't noticed at quick glance any kind of memory leaks but this
deserves a closer look with valgrind for example, still the patch
looks in good shape to me. And more comments for example in pgbench.h
would be welcome to explain more the code. I am fine to take a final
look at that before handling it to a committer though. Does that sound
fine as a plan, Fabien?
-- 
Michael




Re: [HACKERS] Multi-tenancy with RLS

2016-01-06 Thread Haribabu Kommi
On Thu, Jan 7, 2016 at 2:29 PM, Stephen Frost  wrote:
> Robert,
>
> * Robert Haas (robertmh...@gmail.com) wrote:
>
>> Apart from the issue of whether this is doomed for some architectural
>> reason, it is not entirely clear to me that there's any consensus that
>> we want this.  I don't think that I understand the issues here well
>> enough to proffer an opinion of my own just yet... but I'd like to
>> hear what other people think.
>
> I'm certainly of the opinion that we want this or something similar.
>
> The big caveat kicking around in my head is if we want to have our own
> set of defined policies or if we want to give flexibility to the
> administrator to define their own policies.  In particular, I'm
> wondering about things like:
>
> CREATE POLICY namespace_limit ON pg_namespace TO company1 USING
>   (substring(nspname,1,9) = 'company1_');
>
> Which is a bit different, as I understand it, from what Haribabu has
> been proposing and quite a bit more complicated, as we'd then have to
> make the internal lookups respect the policy (so things like CREATE
> SCHEMA would have to check if you're allowed to actually create that
> schema, which would be based on the policy...).

I feel we may need both: our own set of policies, and also letting the
user create/alter/drop the catalog policies. This way we can support
both simple and complex scenarios. With default policies a user can set
up multi-tenancy easily, and with the ability to edit them, the user can
tune the policies according to their scenario.

One problem I see with either approach: with our own set of policies,
the object entries present in the catalog tables are visible only to
users who have some kind of privilege on those objects. So if a user
tries to create an object that already exists in the catalog, it will
produce an error, yet the user cannot see the conflicting object because
of the permissions.

To avoid that problem, the administrator has to add policies such as
"namespace_prefix" to all catalog tables.

Regards,
Hari Babu
Fujitsu Australia




Re: [HACKERS] pglogical_output - a general purpose logical decoding output plugin

2016-01-06 Thread Craig Ringer
On 7 January 2016 at 01:17, Peter Eisentraut  wrote:

> On 12/22/15 4:55 AM, Craig Ringer wrote:
> > I'm a touch frustrated by that, as a large part of the point of
> > submitting the output plugin separately and in advance of the downstream
> > was to get attention for it separately, as its own entity. A lot of
> > effort has been put into making this usable for more than just a data
> > source for pglogical's replication tools.
>
> I can't imagine that there is a lot of interest in a replication tool
> where you only get one side of it, no matter how well-designed or
> general it is.


Well, the other part was posted most of a week ago.

http://www.postgresql.org/message-id/5685bb86.5010...@2ndquadrant.com

... but this isn't just about replication. At least, not just to another
PostgreSQL instance. This plugin is designed to be general enough to use
for replication to other DBMSes (via appropriate receivers), to replace
trigger-based data collection in existing replication systems, for use in
audit data collection, etc.

Want to get a stream of data out of PostgreSQL in a consistent, simple way,
without having to add triggers or otherwise interfere with the origin
database? That's the purpose of this plugin, and it doesn't care in the
slightest what the receiver wants to do with that data. It's been designed
to be usable separately from pglogical downstream and - before the Python
tests were rejected in discussions on this list - was tested using a test
suite completely separate to the pglogical downstream using psycopg2 to
make sure no unintended interdependencies got introduced.

You can do way more than that with the output plugin but you have to write
your own downstream/receiver for the desired purpose, since using a
downstream based on bgworkers and SPI won't make any sense outside
PostgreSQL.

If you just want a canned product to use, see the pglogical post above for
the downstream code.



> Ultimately, what people will want to do with this is
> replicate things, not muse about its design aspects. So if we're going
> to ship a replication solution in PostgreSQL core, we should ship all
> the pieces that make the whole system work.
>

I don't buy that argument. Doesn't that mean logical decoding shouldn't
have been accepted? Or the initial patches for parallel query? Or any
number of other things that're part of incremental development solutions?

(This also seems to contradict what you then argue below, that the proposed
feature is too broad and does too much.)

I'd be happy to see both parts go in, but I'm frustrated that nobody's
willing to see beyond "replicate from one Pg to another Pg" and see all the
other things you can do. Want to replicate to Oracle / MS-SQL / etc? This
will help a lot and solve a significant part of the problem for you. Want
to stream data to append-only audit logs? Ditto. But nope, it's all about
PostgreSQL to PostgreSQL.

Please try to look further into what client applications can do with this
directly. I already know it meets the needs of the pglogical downstream.
What I was hoping to achieve with posting the output plugin earlier was to
get some thought going about what *else* it'd be good for.

Again: pglogical is posted now (it just took longer than expected to get
ready) and I'll be happy to see both it and the output plugin included. I
just urge people to look at the output plugin as more than a tightly
coupled component of pglogical.

Maybe some quality name bikeshedding for the output plugin would help ;)

> Also, I think there are two kinds of general systems: common core, and
> all possible features.  A common core approach could probably be made
> acceptable with the argument that anyone will probably want to do things
> this way, so we might as well implement it once and give it to people.
>

That's what we're going for here. Extensible, something people can build on
and use.


> In a way, the logical decoding interface is the common core, as we
> currently understand it.  But this submission clearly has a lot of
> features beyond just the basics


Really? What would you cut? What's beyond the basics here? What basics are
you thinking of, i.e what set of requirements are you working towards /
needs are you seeking to meet?

We cut this to the bone to produce a minimum viable logical replication
solution. Especially the output plugin.

Cut the hook interfaces for row and xact filtering? You lose the ability to
use replication origins, crippling functionality, and for no real gain in
simplicity.

Remove JSON support? That's what most people are actually likely to want to
use when using the output plugin directly, and it's important for
debugging/tracing/diagnostics. It's a separate feature, to be sure, but
it's also a pretty trivial addition.


> and we could probably go through them
> one by one and ask, why do we need this bit?  So that kind of system
> will be very hard to review as a standalone submission.
>

Again, I disagree. I think 

Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2016-01-06 Thread Jim Nasby

On 1/6/16 2:18 PM, Robert Haas wrote:

On Wed, Jan 6, 2016 at 10:43 AM, Catalin Iacob  wrote:

I also think a list of small things suitable for new contributors
would help attracting them. Not all would stick and go on to larger
items but hopefully at least some would.


I agree with this.  Curating such a list is a fair amount of work that
somebody's got to do, though.  The TODO list is full of an awful lot
of things that don't matter and missing an awful lot of things that
do.


Right. Personally, I feel the TODO has pretty much outlived its
usefulness. An issue tracker would make maintaining items like this a
lot more reasonable, but it certainly wouldn't be free.


Something else to consider though: I sent one email and the task was 
done in 24 hours. For things that can be well defined and are fairly 
mechanical, I suspect an email to hackers with a big [NEW HACKER] banner 
would do wonders.


Related to that is JD's offer to donate staff time to infrastructure 
work. There's probably a fair amount of things that could be "farmed 
out" that way. Some folks in the community proper would still need to 
keep tabs on things, but they wouldn't have to do the gruntwork. If, 
say, the Ops teams at 2nd Quadrant, CMD, and EDB wanted to work together 
on improving infrastructure, that's pretty much community at that point, 
and not a dependence on a single external entity.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2016-01-06 Thread Tom Lane
Robert Haas  writes:
> On Wed, Jan 6, 2016 at 10:43 AM, Catalin Iacob  wrote:
>> I also think a list of small things suitable for new contributors
>> would help attracting them. Not all would stick and go on to larger
>> items but hopefully at least some would.

> I agree with this.  Curating such a list is a fair amount of work that
> somebody's got to do, though.  The TODO list is full of an awful lot
> of things that don't matter and missing an awful lot of things that
> do.

Yeah.  The other problem is that stuff that's actually small doesn't tend
to hang around undone for long, so there's not really a broad array of
stuff just waiting for someone to have a little time.  If we had a more
actively maintained TODO list, it would largely contain (a) stuff that
there's insufficient consensus on, and (b) stuff that's just big mean and
nasty to implement.

Having said that, it occurs to me that one way to contribute without
actually writing C code would be to try to drive those unfinished
discussions to consensus, and come up with specs for features that people
agree are well-thought-out.  Conversely, establishing a consensus that a
proposal is a bad idea and it should go away from the list would be a
useful activity.

regards, tom lane




Re: [HACKERS] Add numeric_trim(numeric)

2016-01-06 Thread Dean Rasheed
On 6 January 2016 at 20:09, Robert Haas  wrote:
> On Wed, Jan 6, 2016 at 10:21 AM, Dean Rasheed  
> wrote:
>> It seems like a useful function to have, but perhaps it should just be
>> called trim() rather than numeric_trim(), for consistency with the
>> names of the other numeric functions, which don't start with
>> "numeric_".
>
> That wouldn't work in this case, because we have hard-coded parser
> productions for TRIM().
>

Ah. Good point.

Pity -- I would have liked a nice short name for this, in a similar
vein to the existing numeric functions.

Regards,
Dean




Re: [HACKERS] checkpointer continuous flushing

2016-01-06 Thread Tomas Vondra

Hi,

On 12/16/2015 08:27 PM, Fabien COELHO wrote:


Hello Tomas,


I'm planning to do some thorough benchmarking of the patches proposed
in this thread, on various types of hardware (10k SAS drives and
SSDs). But is that actually needed? I see Andres did some testing, as
he posted a summary of the results on 11/12, but I don't see any actual
results or even info about what benchmarks were done (pgbench?).

If yes, do we only want to compare 0001-ckpt-14-andres.patch against
master, or do we need to test one of Fabien's previous patches?


My 0.02€,

Although I disagree with some aspects of Andres' patch, I'm not a
committer and I'm tired of arguing. I'm just planning to do minor changes
to Andres' version to fix a potential issue if the file is closed while
flushing is in progress, but that will not change the overall shape of it.

So testing on Andres' version seems relevant to me.


The patch no longer applies to master. Can someone rebase it?

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] On columnar storage (2)

2016-01-06 Thread Robert Haas
On Mon, Dec 28, 2015 at 2:15 PM, Alvaro Herrera
 wrote:
>> 1. CS API.
>> I agree with you that FDW API seems to be not enough to efficiently support
>> work with CS.
>> At least we need batch insert.
>> But maybe it is better to extend the FDW API rather than creating a special API
>> for CS?
>
> The patch we have proposed thus far does not mess with executor
> structure too much, so probably it would be possible to add some things
> here and there to the FDW API and it might work.  But in the long term I
> think the columnar storage project is more ambitious; for instance, I'm
> sure we will want to be able to vectorise certain operations, and the
> FDW API will become a bottleneck, so to speak.  I'm thinking in
> vectorisation in two different ways: one is that some operations such as
> computing aggregates over large data sets can work a lot faster if you
> feed the value of one column for multiple tuples at a time in columnar
> format; that way you can execute the operation directly in the CPU
> (this requires specific support from the aggregate functions.)
> For this to work, the executor needs to be rejigged so that multiple
> values (tuples) can be passed at once.
>
> The other aspect of vectorisation is that one input tuple might have
> been split in several data origins, so that one half of the tuple is in
> columnar format and the other half is in row format; that lets you do
> very fast updates on the row-formatted part, while allowing fast reads
> for the columnar format, for instance.  (It's well known that columnar
> oriented storage does not go well with updates; some implementations even
> disallow updates and deletes altogether.)  Currently within the executor
> a tuple is a TupleTableSlot which contains one Datum array, which has
> all the values coming out of the HeapTuple; but for split storage
> tuples, we will need to have a TupleTableSlot that has multiple "Datum
> arrays" (in a way --- because, actually, once we get to vectorise as in
> the preceding paragraph, we no longer have a Datum array, but some more
> complex representation).
>
> I think that trying to make the FDW API address all these concerns,
> while at the same time *also* serving the needs of external data
> sources, insanity will ensue.

I think the opposite.  Suppose we add vectorization support (or
whatever other feature, could be asynchronous execution or
faster-than-light travel or whatever) to the executor.  Well, are we
going to say that FDWs can't get access to that feature?  I think that
would be an extremely surprising decision.  Presumably, if we add cool
capabilities to the executor, we want FDWs to be able to get access to
those new capabilities just as built-in tables can.  So, we'll
probably think about what new FDW methods - optional methods, probably
- would be needed to expose the new capabilities and add them.

Now, there may still be some reason why it doesn't make sense to have
the columnar store stuff go through the FDW API.  It's sorta doing
something different.  If you tilt your head right, a table with a
columnar store smells a lot like two tables that will frequently need
to be joined; and if we were to implement it that way, then one of
those tables would just be a table, and the other one would be a
"foreign table" that actually has backing storage.

If we don't do it that way, then I'm curious what my mental model for
this feature should be.  We don't have any concept currently of an
"incomplete tuple" that includes only a subset of the columns.  Some
of the columns can be TOAST pointers that have to be expanded before
use, but they can't be left out altogether...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Add numeric_trim(numeric)

2016-01-06 Thread Robert Haas
On Wed, Jan 6, 2016 at 10:21 AM, Dean Rasheed  wrote:
> It seems like a useful function to have, but perhaps it should just be
> called trim() rather than numeric_trim(), for consistency with the
> names of the other numeric functions, which don't start with
> "numeric_".

That wouldn't work in this case, because we have hard-coded parser
productions for TRIM().
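For context, TRIM() gets dedicated grammar rules because the SQL standard gives it keyword-based argument syntax that ordinary function-call resolution cannot parse; a sketch of the forms involved (standard PostgreSQL behavior, shown for illustration):

```sql
-- TRIM's SQL-standard forms are handled by hard-coded parser productions:
SELECT trim(both 'x' from 'xxabcxx');   -- 'abc'
SELECT trim(leading from '  abc');      -- 'abc'
-- so a user-defined trim(numeric) would collide with these productions
-- rather than being resolved through normal function overloading.
```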

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] checkpointer continuous flushing

2016-01-06 Thread Andres Freund
On 2016-01-06 21:01:47 +0100, Tomas Vondra wrote:
> >Although I disagree with some aspects of Andres' patch, I'm not a
> >committer and I'm tired of arguing. I'm just planning to do minor changes
> >to Andres' version to fix a potential issue if the file is closed while
> >flushing is in progress, but that will not change the overall shape of it.

Are you working on that aspect?

> >So testing on Andres' version seems relevant to me.
> 
> The patch no longer applies to master. Can someone rebase it?

I'm working on an updated version, trying to mitigate the performance
regressions I observed.

Andres




Re: [HACKERS] Multi-tenancy with RLS

2016-01-06 Thread Robert Haas
On Tue, Jan 5, 2016 at 11:07 PM, Haribabu Kommi
 wrote:
> May be you missed to apply the 3_shared_catalog_tenancy_v4 path,
> because 4_database_catalog_tenancy_v5 patch depends on it.
>
> Here I attached all the patches for your convenience, I am able to
> apply all patches in the order without any problem.

Is any committer thinking about taking a serious look at this patch series?

I ask because (1) it seems like it could be nice to have but (2) it
frightens me terribly.  We are generally very sparing about assuming
that "stuff" (triggers, partial indexes, etc.) that works for user
tables can also be made to work for system tables.  I haven't thought
deeply about what might go wrong in this particular case, but it
strikes me that if Haribabu Kommi is building something that is doomed
for some reason, it would be good to figure that out before he spends
any more time on it than he already has.

Apart from the issue of whether this is doomed for some architectural
reason, it is not entirely clear to me that there's any consensus that
we want this.  I don't think that I understand the issues here well
enough to proffer an opinion of my own just yet... but I'd like to
hear what other people think.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] No Issue Tracker - Say it Ain't So!

2016-01-06 Thread Robert Haas
On Wed, Jan 6, 2016 at 10:43 AM, Catalin Iacob  wrote:
> I also think a list of small things suitable for new contributors
> would help attracting them. Not all would stick and go on to larger
> items but hopefully at least some would.

I agree with this.  Curating such a list is a fair amount of work that
somebody's got to do, though.  The TODO list is full of an awful lot
of things that don't matter and missing an awful lot of things that
do.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Relation extension scalability

2016-01-06 Thread Robert Haas
On Thu, Dec 31, 2015 at 6:22 AM, Dilip Kumar  wrote:

> On Fri, Dec 18, 2015 at 10:51 AM, Dilip Kumar 
> wrote:
>
> On Sun, Jul 19 2015 9:37 PM Andres Wrote,
>>
>> > The situation the read() protect us against is that two backends try to
>> > extend to the same block, but after one of them succeeded the buffer is
>> > written out and reused for an independent page. So there is no in-memory
>> > state telling the slower backend that that page has already been used.
>>
>> I was looking into this patch, and have done some performance testing.
>>
>> So far I have done testing on my local machine; later I will run it on a
>> big machine once I get access to one.
>>
>> Just wanted to share the current results I get on my local machine.
>> Machine conf: Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz, 8 cores and 16GB
>> of RAM.
>>
>> Test Script:
>> ./psql -d postgres -c "COPY (select g.i::text FROM generate_series(1,
>> 1) g(i)) TO '/tmp/copybinarywide' WITH BINARY";
>>
>> ./psql -d postgres -c "truncate table data"
>> ./psql -d postgres -c "checkpoint"
>> ./pgbench -f copy_script -T 120 -c$ -j$ postgres
>>
>
> This time I have done some testing on a big machine with 64 physical
> cores @ 2.13GHz and 50GB of RAM.
>
> Here is a performance comparison of the base code, the patch from Andres
> that extends without holding the RelationExtensionLock, and the
> multi-extend patch (which extends multiple blocks at a time, based on a
> configuration parameter).
>
>
> *Problem Analysis:*
> 1. With the base code, when I tried to observe the problem using perf and
> other methods (gdb), I found that RelationExtensionLock is the main
> bottleneck.
> 2. Then, after applying the RelationExtensionLock-free patch, I observed
> that the contention is now in FileWrite (all backends are trying to
> extend the file).
>
>
> *Performance Summary and Analysis:*
> 1. In my performance results, the multi-extend patch showed the best
> performance and scalability.
> 2. I think that by extending multiple blocks at a time we solve both
> problems (the extension lock and parallel file writes).
> 3. After extending a block we immediately add it to the FSM, so in most
> cases other backends can find it directly without taking the extension
> lock.
>
> Currently the patch is at an initial stage; I have only tested
> performance and passed the regression test suite.
>
>
>
> *Open problems:*
> 1. After extending a page we add it directly to the FSM, so if vacuum
> finds this page as new it will give a WARNING.
> 2. In RelationGetBufferForTuple, when PageIsNew, we do PageInit; the same
> needs to be considered for the index cases.
>
>
>
> *Test Script:-*
> ./psql -d postgres -c "COPY (select g.i::text FROM generate_series(1,
> 1) g(i)) TO '/tmp/copybinarywide' WITH BINARY";
>
> ./psql -d postgres -c "truncate table data"
> ./psql -d postgres -c "checkpoint"
> ./pgbench -f copy_script -T 120 -c$ -j$ postgres
>
> *Performance Data:*
> --
> There are three code bases compared:
> 1. Base code
>
> 2. Lock-free patch: the patch given in the thread below
> http://www.postgresql.org/message-id/20150719140746.gh25...@awork2.anarazel.de
>
> 3. Multi-extend patch, attached to this mail.
> *extend_num_pages*: this is a new config parameter that tells how many
> extra pages to extend in the normal extend case; maybe it would give the
> user more control if we made it a relation property.
>
> I will work on the patch for this CF, so I am adding it to the CF.
>
>
> *Shared Buffer 48 GB*
>
> Clients   Base (TPS)   Lock Free Patch   Multi-extend (extend_num_pages=5)
> 1         142          138               148
> 2         251          253               280
> 4         237          416               464
> 8         168          491               575
> 16        141          448               404
> 32        122          337               332
>
>
>
> *Shared Buffer 64 MB*
>
> Clients   Base (TPS)   Multi-extend (extend_num_pages=5)
> 1         140          148
> 2         252          266
> 4         229          437
> 8         153          475
> 16        132          364
>
>
I'm not really sure what this email is trying to say.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company