Re: [HACKERS] How to inspect tuples during execution of a plan?

2016-10-11 Thread Ashutosh Bapat
On Mon, Oct 10, 2016 at 1:39 PM, Ernst-Georg Schmid
 wrote:
> Hello all,
>
> I'd like to inspect the content of tuples as they are sent during the
> execution of a query in order to react to their values.

The correct answer will depend upon the purpose of this inspection.
You may write a function that can be invoked through a query wrapping
the original query, or you may add new planner code (check custom
scans) that does the inspection. There are many ways to do this, with
different degrees of invasiveness. They may or may not suit your
purpose.

>
> I guess I could do it with an FDW, but that's a bit clumsy, so I took a
> look at the hooks. I have two questions:
>
> 1.) Would ExecutorRun_hook be the correct place to implement such a
> 'tuple inspector'?
> 2.) If yes, how? As far as I understand the source code, I would have
> to provide my own implementation based on standard_ExecutorRun and
> replace the ExecutePlan function with my own that takes a look at
> each 'slot', like so:
>
> if (estate->es_junkFilter != NULL)
>     slot = ExecFilterJunk(estate->es_junkFilter, slot);
>
> /* tuple inspection here */
>
> /*
>  * If we are supposed to send the tuple somewhere, do so. (In
>  * practice, this is probably always the case at this point.)
>  */
> if (sendTuples)
> {
>
> If there is a better way, please advise. I'm really a newbie to this.

Either of those would do, if you want to change the executor.
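For context, here is a minimal sketch of what installing an ExecutorRun_hook
from an extension looks like (against a 9.6-era tree; the hook signature has
changed across versions, and all names here are illustrative). As noted in
the comment, per-tuple inspection still needs either a custom ExecutePlan or
a wrapper around the query's DestReceiver, since the hook only wraps the
whole run:

#include "postgres.h"
#include "fmgr.h"
#include "executor/executor.h"

PG_MODULE_MAGIC;

static ExecutorRun_hook_type prev_ExecutorRun = NULL;

/*
 * Wraps the whole executor run.  Per-tuple inspection would have to happen
 * in a reimplemented ExecutePlan or in a DestReceiver wrapper around
 * queryDesc->dest, not here.
 */
static void
inspect_ExecutorRun(QueryDesc *queryDesc, ScanDirection direction, uint64 count)
{
	elog(LOG, "running query: %s", queryDesc->sourceText);

	if (prev_ExecutorRun)
		prev_ExecutorRun(queryDesc, direction, count);
	else
		standard_ExecutorRun(queryDesc, direction, count);
}

void
_PG_init(void)
{
	prev_ExecutorRun = ExecutorRun_hook;
	ExecutorRun_hook = inspect_ExecutorRun;
}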

-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company




Re: [HACKERS] "Some tests to cover hash_index"

2016-10-11 Thread Mithun Cy
On Wed, Sep 28, 2016 at 9:56 PM, Robert Haas  wrote:
> something committable will come from it, but with 2 days left it's not
> going to happen this CF.
Adding a new patch. This one uses generate_series instead of INSERT INTO
SELECT, and fixes the comments from Alvaro.

-- 
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com


commit-hash_coverage_test_v3_no_wal.patch
Description: Binary data



Re: [HACKERS] New SQL counter statistics view (pg_stat_sql)

2016-10-11 Thread vinayak



On 2016/10/12 12:21, Haribabu Kommi wrote:
> On Thu, Sep 29, 2016 at 3:45 PM, Haribabu Kommi wrote:
>> On Thu, Sep 22, 2016 at 3:05 AM, Alvaro Herrera wrote:
>>> Peter Eisentraut wrote:
>>>> How about having the tag not be a column name but a row entry.  So you'd
>>>> do something like
>>>>
>>>> SELECT * FROM pg_stat_sql WHERE tag = 'ALTER VIEW';
>>>>
>>>> That way, we don't have to keep updating (and re-debating) this when new
>>>> command types or subtypes are added.  And queries written for future
>>>> versions will not fail when run against old servers.
>>>
>>> Yeah, good idea.
>>
>> Yes, having it as a row entry is good.
>>
>>> Let's also discuss the interface from the stats collector.  Currently we
>>> have some 20 new SQL functions, all alike, each loading the whole data
>>> and returning a single counter, and then the view invokes each function
>>> separately.  That doesn't seem great to me.  How about having a single C
>>> function that returns the whole thing as an SRF instead, and the view is
>>> just a single function invocation -- something like pg_lock_status
>>> filling pg_locks in one go.
>>>
>>> Another consideration is that the present patch lumps together all ALTER
>>> cases in a single counter.  This isn't great, but at the same time we
>>> don't want to bloat the stat files by having hundreds of counters per
>>> database, do we?
>>
>> Currently, the SQL stats use a fixed-size set of counters and track all the
>> ALTER cases as a single counter.  So when sending the stats from the backend
>> to the stats collector at the end of a transaction, the cost is the same
>> because of the fixed size; the overhead this approach adds for sending and
>> reading the stats is minimal.
>>
>> With the following approach, I feel it is possible to support counters at
>> the command-tag level:
>>
>> Add a global and a local hash to keep track of the counters, using the
>> command tag as the key; this hash table grows dynamically whenever a new
>> type of SQL command gets executed.  The local hash data is passed to the
>> stats collector whenever the transaction gets committed.
>>
>> The problem I see is that sending data from the hash, and populating the
>> hash from the stats file for all the command tags, adds some overhead.
>
> I tried changing pg_stat_sql into row mode, ran the regression suite to
> add different types of SQL commands to the view, and ran the pgbench test
> on my laptop to find out any performance impact of this patch.
>
>                    HEAD   PATCH
> pgbench - select    828     816
>
> Here I attached the pg_stat_sql patch to keep track of all SQL commands
> based on the commandTag and their counts.  I attached the result of this
> view as run on the database after "make installcheck", just for reference.


Thank you for the patch.

Test: commands with uppercase and lowercase tags

If tag='select' is used, the query returns 0 rows, but the SELECT count
is still incremented by 1.

tag='select' vs tag='SELECT':

postgres=# SET track_sql TO ON;
SET
postgres=# SELECT * FROM pg_stat_sql where tag='SELECT';
  tag   | count
--------+-------
 SELECT |    12
(1 row)

postgres=# SELECT * FROM pg_stat_sql where tag='SELECT';
  tag   | count
--------+-------
 SELECT |    13
(1 row)

postgres=# SELECT * FROM pg_stat_sql where tag='select';
 tag | count
-----+-------
(0 rows)

postgres=# SELECT * FROM pg_stat_sql where tag='SELECT';
  tag   | count
--------+-------
 SELECT |    15
(1 row)

I think all commands behave the same way as above.

Regards,
Vinayak Pokale
NTT Open Source Software Center


Re: [HACKERS] autonomous transactions

2016-10-11 Thread Pavel Stehule
2016-10-11 21:54 GMT+02:00 Merlin Moncure :

> On Tue, Oct 11, 2016 at 10:06 AM, Petr Jelinek 
> wrote:
> > On 10/10/16 16:44, Merlin Moncure wrote:
> >> On Thu, Oct 6, 2016 at 3:53 PM, Simon Riggs 
> wrote:
> >>> On 6 October 2016 at 21:27, Robert Haas  wrote:
>  I think we should implement background transactions and call them
>  background transactions.  That allows us to expose additional
>  functionality which is useful, like the ability to kick something off
>  and check back later for the results.  There's no reason to call it
>  background transactions and also call it autonomous transactions: one
>  feature doesn't need two names.
> >>>
> >>> I'm happy to also invoke it via an alternate mechanism or API, so that
> >>> it can continue to be used even if the above mechanism changes.
> >>>
> >>> We have no need to wait for the perfect solution, even assuming we
> >>> would ever agree that just one exists.
> >>
> >> -1 on implementing both autonomous and background transactions.  This
> >> will confuse everyone.
> >
> > I personally care much more about having background transactions than
> > autonomous ones (as I only ever had use-cases for the background ones)
> > so don't agree there.
>
> All right.  But would you agree then that AT should at least emulate
> competing implementations? A major advantage of bgworkers is possibly
> supporting concurrent activity, and maybe the syntax could be directed
> more toward moving in that direction rather than copying the Oracle
> style (PRAGMA etc.), particularly if the locking rules are
> substantially different.
>

There is a big trap: AT is usually used for writing to log tables. When BT
fails because the maximum number of active workers has been reached, you
cannot perform the expected operation.

Regards

Pavel


>
> merlin
>


Re: [HACKERS] Remove "Source Code" column from \df+ ?

2016-10-11 Thread Pavel Stehule
2016-10-12 1:51 GMT+02:00 Tom Lane :

> Stephen Frost  writes:
> > As was mentioned, this thread doesn't really need a patch but rather
> > some comment from those who have voiced a -1 on removing the PL source
> > code column.
>
> > In another, perhaps vain, attempt to get to a consensus, here's what it
> > looks like the current standings are for "Remove source from \df+",
>
> I think this is oversimplified, because there are multiple proposals on
> the table, and it's not entirely clear to me who approves of which.
> We have at least the following options:
>
> 1. Do nothing.
> 2. Remove the prosrc column from \df+ altogether.
> 3. Suppress prosrc for PL functions, but continue to show it for
>C and internal functions (and, probably, rename it to something
>other than "Source code" in that case).
> 4. #3 plus show PL function source code in footers.
>
> Personally I like #4 better than #3 better than #2 better than #1,
> but the only one I'm really against is "do nothing".
>

My preferences: #2, then #1. I dislike #4 even more than #1 - I don't see
any benefit there.

Regards

Pavel


>
> > There have been a number of voices asking that we do *something* here.
>
> Yes.  I agree with your summary that Peter is the only one who appears
> to be in favor of "do nothing" (and even there, his complaint was at
> least partly procedural not substantive).
>
> regards, tom lane
>


Re: [HACKERS] New SQL counter statistics view (pg_stat_sql)

2016-10-11 Thread Haribabu Kommi
On Thu, Sep 29, 2016 at 3:45 PM, Haribabu Kommi 
wrote:

>
>
> On Thu, Sep 22, 2016 at 3:05 AM, Alvaro Herrera 
> wrote:
>
>> Peter Eisentraut wrote:
>>
>> > How about having the tag not be a column name but a row entry.  So you'd
>> > do something like
>> >
>> > SELECT * FROM pg_stat_sql WHERE tag = 'ALTER VIEW';
>> >
>> > That way, we don't have to keep updating (and re-debating) this when new
>> > command types or subtypes are added.  And queries written for future
>> > versions will not fail when run against old servers.
>>
>> Yeah, good idea.
>>
>
> Yes, having it as a row entry is good.
>
>
>
>> Let's also discuss the interface from the stats collector.  Currently we
>> have some 20 new SQL functions, all alike, each loading the whole data
>> and returning a single counter, and then the view invokes each function
>> separately.  That doesn't seem great to me.  How about having a single C
>> function that returns the whole thing as a SRF instead, and the view is
>> just a single function invocation -- something like pg_lock_status
>> filling pg_locks in one go.
>>
>> Another consideration is that the present patch lumps together all ALTER
>> cases in a single counter.  This isn't great, but at the same time we
>> don't want to bloat the stat files by having hundreds of counters per
>> database, do we?
>
>
> Currently, the SQL stats use a fixed-size set of counters and track all the
> ALTER cases as a single counter. So when sending the stats from the backend
> to the stats collector at the end of a transaction, the cost is the same
> because of the fixed size; the overhead this approach adds for sending and
> reading the stats is minimal.
>
> With the following approach, I feel it is possible to support counters at
> the command-tag level:
>
> Add a global and a local hash to keep track of the counters, using the
> command tag as the key; this hash table grows dynamically whenever a new
> type of SQL command gets executed. The local hash data is passed to the
> stats collector whenever the transaction gets committed.
>
> The problem I see is that sending data from the hash, and populating the
> hash from the stats file for all the command tags, adds some overhead.
>


I tried changing pg_stat_sql into row mode, ran the regression suite to
add different types of SQL commands to the view, and ran the pgbench test
on my laptop to find out any performance impact of this patch.

                   HEAD   PATCH
pgbench - select    828     816

Here I attached the pg_stat_sql patch to keep track of all SQL commands
based on the commandTag and their counts. I attached the result of this
view as run on the database after "make installcheck", just for
reference.
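For readers unfamiliar with the backend hash machinery, here is a minimal
sketch of the per-backend command-tag hash described above, built on
dynahash. The names, table size, and key length are assumptions for
illustration only, and memory-context handling is omitted:

#include "postgres.h"
#include "utils/hsearch.h"

#define TAG_KEY_LEN 64

/* one counter per command tag; the tag string itself is the hash key */
typedef struct SqlStatEntry
{
	char	tag[TAG_KEY_LEN];
	int64	count;
} SqlStatEntry;

static HTAB *local_sql_stats = NULL;

static void
count_command_tag(const char *tag)
{
	SqlStatEntry *entry;
	bool		found;

	if (local_sql_stats == NULL)
	{
		HASHCTL		ctl;

		MemSet(&ctl, 0, sizeof(ctl));
		ctl.keysize = TAG_KEY_LEN;
		ctl.entrysize = sizeof(SqlStatEntry);
		/* with only HASH_ELEM, 9.6-era dynahash hashes keys as strings */
		local_sql_stats = hash_create("SQL command tag stats",
									  128, &ctl, HASH_ELEM);
	}

	entry = (SqlStatEntry *) hash_search(local_sql_stats, tag,
										 HASH_ENTER, &found);
	if (!found)
		entry->count = 0;
	entry->count++;
}

At transaction commit, the entries would be walked with hash_seq_init() and
hash_seq_search() and shipped to the stats collector.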

Regards,
Hari Babu
Fujitsu Australia
                tag               | count
----------------------------------+-------
 ALTER TABLE                      |   505
 DROP COLLATION                   |     2
 ALTER POLICY                     |    13
 CREATE TABLESPACE                |     2
 DROP POLICY                      |    12
 CREATE TYPE                      |    78
 CREATE MATERIALIZED VIEW         |    24
 ALTER INDEX                      |     8
 CREATE TEXT SEARCH PARSER        |     5
 SELECT                           |  8509
 REINDEX                          |     7
 SET                              |   638
 SHOW                             |    63
 CREATE AGGREGATE                 |    70
 DROP FUNCTION                    |   180
 DROP OPERATOR                    |    12
 DROP USER MAPPING                |    23
 COMMENT                          |    55
 GRANT ROLE                       |     9
 ALTER FOREIGN TABLE              |    54
 CREATE DOMAIN                    |    59
 DROP TABLE                       |   481
 DROP SERVER                      |     9
 DROP DOMAIN                      |    38
 INSERT                           |  2663
 DROP INDEX                       |    56
 ALTER SERVER                     |    17
 CREATE VIEW                      |   260
 CREATE TABLE                     |  1023
 CREATE TEXT SEARCH CONFIGURATION |    11
 ROLLBACK                         |   166
 ALTER DATABASE                   |     5
 DISCARD ALL                      |     1
 CLOSE CURSOR ALL                 |     2
 DROP EVENT TRIGGER               |    11
 SAVEPOINT                        |    62
 ALTER OPERATOR CLASS             |     9
 DROP RULE                        |    26
 DROP LANGUAGE                    |     3
 CREATE OPERATOR FAMILY           |    20
 ALTER USER MAPPING               |    12
 DROP SCHEMA                      |    29
 LISTEN                           |     2
 DROP TEXT SEARCH PARSER          |     2
 CREATE TEXT SEARCH TEMPLATE      |     5
 DEALLOCATE                       |     7
 CREATE FUNCTION                  |   512
 REVOKE                           |    68
 CREATE TABLE AS                  |    52
 DROP MATERIALIZED VIEW           |     4

Re: [HACKERS] DISTINCT with btree skip scan

2016-10-11 Thread Thomas Munro
On Sat, Nov 1, 2014 at 10:35 AM, Thomas Munro  wrote:
> I am definitely interested in collaborating on a series of patches to
> implement various kinds of skip-based plans as seen in other RDBMSs if
> others think it could be useful.  I see skip-based DISTINCT as a good
> place to start.  (I suspect the semi-related skip scan plan (for the
> case when your WHERE clause doesn't have a qual for the leading
> column(s)) is much harder to plan and execute and I also suspect it's
> less generally useful).

There's an interesting thread about merge joins with a skip
optimisation[1] ("leapfrog trie-join", though I haven't read the paper
yet).  That reminded me to go and rebase this experimental patch which
is somewhat related.  Here it is, now split into two parts: one patch
to add an amskip operation, and another to consider using it to
implement DISTINCT when it was already has an index only scan path on
an index that supports skipping.  As you can see it's still sketchy
experiment-grade code, but it now has a first attempt at costing.

postgres=# select count(a) from foo;
┌─────────┐
│  count  │
├─────────┤
│     500 │
└─────────┘
(1 row)

Time: 493.501 ms
postgres=# select distinct a from foo;
┌───┐
│ a │
├───┤
│ 0 │
│ 1 │
│ 2 │
│ 3 │
│ 4 │
│ 5 │
│ 6 │
│ 7 │
│ 8 │
│ 9 │
└───┘
(10 rows)

Time: 0.516 ms
postgres=# explain select distinct a from foo;
┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                                     QUERY PLAN                                                       │
├─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Index Only Scan for distinct values of leading 1 column(s) using foo_pkey on foo  (cost=0.43..4.33 rows=10 width=4)  │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
(1 row)

Time: 0.378 ms
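To make the skip idea concrete, here is a self-contained toy model (plain C
over a sorted array, not PostgreSQL code; the row count and value
distribution are assumptions) of why skip-based DISTINCT wins when there are
few distinct values: after emitting a value, it re-descends (binary search)
past the current value instead of reading every entry:

#include <stdio.h>

#define N 5000000

static int keys[N];

/*
 * Find the first position whose key is greater than 'value' by binary
 * search: the array-world analogue of re-descending a btree.
 */
static size_t
skip_past(size_t pos, int value)
{
	size_t		lo = pos,
				hi = N;

	while (lo < hi)
	{
		size_t		mid = lo + (hi - lo) / 2;

		if (keys[mid] <= value)
			lo = mid + 1;
		else
			hi = mid;
	}
	return lo;
}

int
main(void)
{
	size_t		pos = 0;

	for (size_t i = 0; i < N; i++)
		keys[i] = (int) (i / (N / 10));		/* ten distinct values, 0..9 */

	/* each distinct value costs O(log N) instead of O(N / 10) reads */
	while (pos < N)
	{
		printf("%d\n", keys[pos]);
		pos = skip_past(pos, keys[pos]);
	}
	return 0;
}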

[1] 
https://www.postgresql.org/message-id/flat/CAFQtOwp4e1hU6gKi1R_nHgOLTzUJko8EcaC1FdAV0rQFo1ARzA%40mail.gmail.com

-- 
Thomas Munro
http://www.enterprisedb.com


0001-indexam-skip-v3.patch
Description: Binary data


0002-distinct-skip-v3.patch
Description: Binary data



Re: [HACKERS] int2vector and btree indexes

2016-10-11 Thread Amit Langote
On 2016/10/11 21:40, Tom Lane wrote:
> Amit Langote  writes:
>> I was wrong that the index *never* gets used.  It does in fact get used if
>> the operator is an ordering search operator (<, <=, >, >=), in which case
>> the query would use an array_ops operator (which is a btree operator class
>> for type anyarray) and hence matches the index operator family.  I failed
>> to mention in my original message that int2vector_ops is a hash operator
>> class.  There is exactly one =(int2vector, int2vector) operator in the
>> system, and it has no btree equivalent.
> 
> Hmm ... I kind of wonder why we have int2vectoreq or hashint2vector at
> all, likewise the hash opclass based on them.  The code says that they
> are needed to support catcache index columns, but the only columns of
> this type are
> 
> regression=# select attrelid::regclass,attname from pg_attribute where 
> atttypid = 'int2vector'::regtype;
>   attrelid  |  attname  
> +---
>  pg_index   | indkey
>  pg_index   | indoption
>  pg_trigger | tgattr
> (3 rows)
> 
> and those don't have indexes at all, let alone catcaches based on them.
> So it looks to me like we could remove this infrastructure.  There is
> value in being able to hash int2vectors during queries, for sure, but
> we could let that be done by the anyarray hash opclass.

Agreed.  So how about the attached patch to remove the said infrastructure?

> Having said that, int2vector is not meant as a user-facing type and so
> I don't particularly care whether indexes built on it work conveniently.
> But it looks to me like we've got some unnecessary code here.

Ah, I did wonder whether int2vector has been deprecated as a user-facing
type.  Anyway after applying the patch, it seems that the original
complaint I raised is no longer an issue (or so I think) - operators
applied to int2vector are always resolved to those accepting anyarray and
matched with anyarray_ops of the correct index access method.

Thanks,
Amit
diff --git a/src/backend/access/hash/hashfunc.c b/src/backend/access/hash/hashfunc.c
index 614f4ff..12dce2e 100644
--- a/src/backend/access/hash/hashfunc.c
+++ b/src/backend/access/hash/hashfunc.c
@@ -131,14 +131,6 @@ hashoidvector(PG_FUNCTION_ARGS)
 }
 
 Datum
-hashint2vector(PG_FUNCTION_ARGS)
-{
-	int2vector *key = (int2vector *) PG_GETARG_POINTER(0);
-
-	return hash_any((unsigned char *) key->values, key->dim1 * sizeof(int16));
-}
-
-Datum
 hashname(PG_FUNCTION_ARGS)
 {
 	char	   *key = NameStr(*PG_GETARG_NAME(0));
diff --git a/src/backend/utils/adt/int.c b/src/backend/utils/adt/int.c
index 29d92a7..443ac5c 100644
--- a/src/backend/utils/adt/int.c
+++ b/src/backend/utils/adt/int.c
@@ -254,22 +254,6 @@ int2vectorsend(PG_FUNCTION_ARGS)
 	return array_send(fcinfo);
 }
 
-/*
- * We don't have a complete set of int2vector support routines,
- * but we need int2vectoreq for catcache indexing.
- */
-Datum
-int2vectoreq(PG_FUNCTION_ARGS)
-{
-	int2vector *a = (int2vector *) PG_GETARG_POINTER(0);
-	int2vector *b = (int2vector *) PG_GETARG_POINTER(1);
-
-	if (a->dim1 != b->dim1)
-		PG_RETURN_BOOL(false);
-	PG_RETURN_BOOL(memcmp(a->values, b->values, a->dim1 * sizeof(int16)) == 0);
-}
-
-
 /*
  *	 PUBLIC ROUTINES		 *
  */
diff --git a/src/backend/utils/cache/catcache.c b/src/backend/utils/cache/catcache.c
index db7099f..6016d19 100644
--- a/src/backend/utils/cache/catcache.c
+++ b/src/backend/utils/cache/catcache.c
@@ -126,11 +126,6 @@ GetCCHashEqFuncs(Oid keytype, PGFunction *hashfunc, RegProcedure *eqfunc)
 
 			*eqfunc = F_INT2EQ;
 			break;
-		case INT2VECTOROID:
-			*hashfunc = hashint2vector;
-
-			*eqfunc = F_INT2VECTOREQ;
-			break;
 		case INT4OID:
 			*hashfunc = hashint4;
 
diff --git a/src/include/access/hash.h b/src/include/access/hash.h
index 491d4c9..725e2f2 100644
--- a/src/include/access/hash.h
+++ b/src/include/access/hash.h
@@ -283,7 +283,6 @@ extern Datum hashenum(PG_FUNCTION_ARGS);
 extern Datum hashfloat4(PG_FUNCTION_ARGS);
 extern Datum hashfloat8(PG_FUNCTION_ARGS);
 extern Datum hashoidvector(PG_FUNCTION_ARGS);
-extern Datum hashint2vector(PG_FUNCTION_ARGS);
 extern Datum hashname(PG_FUNCTION_ARGS);
 extern Datum hashtext(PG_FUNCTION_ARGS);
 extern Datum hashvarlena(PG_FUNCTION_ARGS);
diff --git a/src/include/catalog/pg_amop.h b/src/include/catalog/pg_amop.h
index 15b6290..e4c3515 100644
--- a/src/include/catalog/pg_amop.h
+++ b/src/include/catalog/pg_amop.h
@@ -573,8 +573,6 @@ DATA(insert (	2040   1114 1114 1 s 2060 405 0 ));
 DATA(insert (	   16 16 1 s   91 405 0 ));
 /* bytea_ops */
 DATA(insert (	2223   17 17 1 s 1955 405 0 ));
-/* int2vector_ops */
-DATA(insert (	2224   22 22 1 s	386 405 0 ));
 /* xid_ops */
 DATA(insert (	2225   28 28 1 s	352 405 0 ));
 /* cid_ops */
diff --git a/src/include/catalog/pg_amproc.h 

[HACKERS] buildfarm client release 4.18

2016-10-11 Thread Andrew Dunstan


I have just released buildfarm client version 4.18.


In addition to some minor fixes, there are two significant changes:


 * The client now makes a determined effort to clean up any leftover
   build artefacts from previous runs at the start of a run. It also
   tries to clean away old socket files. That means that manual fixes
   are much less likely to be required if the script has crashed or
   the machine has crashed or restarted.
 * There is a new config parameter "wait_timeout", which defaults to
   undefined. If it is set, then any run that takes longer than the
   specified number of seconds is aborted. That means that runs that
   get stuck will time out, and the whole animal won't be stuck
   requiring manual intervention. My test animal has a setting of 3 *
   3600, i.e. 3 hours. There is some possible danger in this - we
   might miss bugs that cause us to get stuck - so at least some
   animals should not use this feature.


The release can be downloaded from: 




cheers


andrew





Re: [HACKERS] Remove "Source Code" column from \df+ ?

2016-10-11 Thread Tom Lane
Stephen Frost  writes:
> As was mentioned, this thread doesn't really need a patch but rather
> some comment from those who have voiced a -1 on removing the PL source
> code column.

> In another, perhaps vain, attempt to get to a consensus, here's what it
> looks like the current standings are for "Remove source from \df+",

I think this is oversimplified, because there are multiple proposals on
the table, and it's not entirely clear to me who approves of which.
We have at least the following options:

1. Do nothing.
2. Remove the prosrc column from \df+ altogether.
3. Suppress prosrc for PL functions, but continue to show it for
   C and internal functions (and, probably, rename it to something
   other than "Source code" in that case).
4. #3 plus show PL function source code in footers.

Personally I like #4 better than #3 better than #2 better than #1,
but the only one I'm really against is "do nothing".

> There have been a number of voices asking that we do *something* here.

Yes.  I agree with your summary that Peter is the only one who appears
to be in favor of "do nothing" (and even there, his complaint was at
least partly procedural not substantive).

regards, tom lane




Re: [HACKERS] [PATCH] pgpassfile connection option

2016-10-11 Thread Oskari Saarenmaa

On 05.10.2016 at 20:06, Robert Haas wrote:
> You could do something like that, I guess, but I think it might be a
> good idea to wait and see if anyone else has opinions on (1) the
> desirability of the basic feature, (2) the severity of the security
> hazard it creates, and (3) your proposed remediation method.

I don't think it's appropriate to compare the proposed pgpassfile option
to sslkey: the key is never transmitted over the network anywhere.

> So far I don't see anybody actually endorsing your proposal here,
> which might mean that any patch you produce will be rejected on the
> grounds that nobody has a clear use case for this feature.
>
> Hey, everybody: chime in here...

The original patch didn't come with a description of the problem it was
aiming to solve, but I've wanted something perhaps a bit similar in the
past.

The pghoard backup & restore suite we maintain needs to be able to spawn
pg_basebackup and pg_receivexlog processes, and in order to avoid passing
passwords in command-line arguments visible to everyone who can get a
process listing, we currently have pghoard edit pgpass files.

Having to edit pgpass files at all is annoying, and there isn't really a
good way to do it: we can edit the one at $HOME and hope it doesn't
clash with the expectations of the user running the software, write it
to a custom file somewhere else - copying the password to a location the
user might not expect - or create a new temporary file on every invocation.

I came across some bad advice about passing passwords to libpq today:
there was a recommendation for setting a $PASSWORD environment variable
and including that in the connection string, using the shell to expand it.
It got me thinking that maybe such a feature could be implemented in
libpq: have it expand unquoted $variables; for example:

  $ PASSWORD=xyz psql 'password=$PASSWORD dbname=foo'

This does have the hazard of making it very easy to accidentally use
double quotes instead of single quotes and have the shell expand the
variable, making it visible in the process listing, though.
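As an aside, for programs that link libpq directly (this doesn't help with
spawning pg_basebackup and friends), the existing PQconnectdbParams()
interface already lets a password bypass both the command line and
connection-string quoting. A minimal sketch, with placeholder connection
parameters:

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	/*
	 * The password travels in the parameter array, never on the command
	 * line; in practice it would be read from a pipe, file descriptor, or
	 * environment rather than hard-coded.
	 */
	const char *keywords[] = {"dbname", "user", "password", NULL};
	const char *values[]   = {"foo", "app", "xyz", NULL};

	PGconn	   *conn = PQconnectdbParams(keywords, values, 0);

	if (PQstatus(conn) != CONNECTION_OK)
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
	PQfinish(conn);
	return 0;
}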


/ Oskari




[HACKERS] Remove "Source Code" column from \df+ ?

2016-10-11 Thread Stephen Frost
All,

Starting a new thread with an accurate name to see if we can't get
somewhere with this topic.

* Pavel Stehule (pavel.steh...@gmail.com) wrote:
> 2016-10-08 23:46 GMT+02:00 Jim Nasby :
> > On 10/3/16 3:18 PM, Pavel Stehule wrote:
> >> I am feeling consensus on removing source of PL from \dt+. There is
> >> partial consensus on saving this field (renamed) for C and internal
> >> language. I am not sure about consensus about \sf enhancing.
> >
> > FWIW, I'm completely in favor of ditching PL source code. I'm neutral on C
> > and internal.
> 
> here is a patch

As was mentioned, this thread doesn't really need a patch but rather
some comment from those who have voiced a -1 on removing the PL source
code column.

In another, perhaps vain, attempt to get to a consensus, here's what it
looks like the current standings are for "Remove source from \df+", to
me:

  Peter:-1
  Robert:   -0
  Michael:  +0
  Alvaro:   +1
  Jim:  +1
  Pavel:+1
  Rushabh:  +1
  Stephen:  +1
  Tom:  +1

There have been a number of voices asking that we do *something* here.

In short, I believe Robert's willing to concede to the majority (see:
CA+TgmoaPCBUGF7yTcjmiU=m2sgo8jantnkhmtm1xkoar5uq...@mail.gmail.com), but
we have yet to hear if Peter's stance has changed on this since his July
posts (see: f16571cc-bf6f-53a1-6809-f09f48f0a...@2ndquadrant.com) and
that's a remaining full -1 vote.

Apologies if I got this wrong or mis-represented anyone, just trying to
drive towards a consensus on this, so we can move on.  Please speak up
if you feel this was an incorrect assessment of your position.

Full original thread is here:
https://www.postgresql.org/message-id/flat/CAB7nPqTR3Vu3xKOZOYqSm-%2BbSZV0kqgeGAXD6w5GLbkbfd5Q6w%40mail.gmail.com

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Macro customizable hashtable / bitmapscan & aggregation perf

2016-10-11 Thread Tomas Vondra

On 10/11/2016 05:56 PM, Andres Freund wrote:
> On 2016-10-11 04:29:31 +0200, Tomas Vondra wrote:
>> On 10/11/2016 04:07 AM, Andres Freund wrote:
>>> On 2016-10-10 17:46:22 -0700, Andres Freund wrote:
>>>>> TPC-DS (tpcds.ods)
>>>>> ------------------
>>>>>
>>>>> In this case, I'd say the results are less convincing. There are quite a few
>>>>> queries that got slower by ~10%, which is well above - for example queries
>>>>> 22 and 67. There are of course queries that got ~10% faster, and in total
>>>>> the patched version executed more queries (so overall the result is slightly
>>>>> positive, but not significantly).
>>>>
>>>> That's interesting. I wonder whether that's plan changes just due to the
>>>> changing memory estimates, or what's causing that. I'll look into it.
>>>
>>> Hm. Based on an initial look those queries aren't planned with any of
>>> the affected codepaths.  Could this primarily be a question of
>>> randomness? Would it perhaps make sense to run the tests in a comparable
>>> order? Looking at tpcds.py and the output files, it seems that the query
>>> order differs between the runs; that can easily explain bigger
>>> differences than the above. For me a scale=1 run creates a database of
>>> approximately 4.5GB, thus with shared_buffers=1GB execution order is
>>> likely to have a significant performance impact.
>>
>> Yes, I see similar plans (no bitmap index scans or hash aggregates). But the
>> difference is there, even when running the query alone (so it's not merely
>> due to the randomized ordering).
>>
>> I wonder whether this is again due to compiler moving stuff around.
>
> It looks like that. I looked through a significant set of plans where
> there were time differences (generated on my machine), and none of them
> had bitmap or hash groupings to any significant degree.  Comparing
> profiles in those cases usually shows a picture like:
>
> 24.98%   +0.32%  postgres[.] slot_deform_tuple
> 16.58%   -0.05%  postgres[.] ExecMakeFunctionResultNoSets
> 12.41%   -0.01%  postgres[.] slot_getattr
>  6.10%   +0.39%  postgres[.] heap_getnext
>  4.41%   -0.37%  postgres[.] ExecQual
>  3.08%   +0.12%  postgres[.] ExecEvalScalarVarFast
>  2.85%   -0.11%  postgres[.] check_stack_depth
>  2.48%   +0.42%  postgres[.] ExecEvalConst
>  2.44%   -0.33%  postgres[.] heapgetpage
>  2.34%   +0.11%  postgres[.] ExecScan
>  2.14%   -0.20%  postgres[.] ExecStoreTuple
>
> I.e. pretty random performance changes. This indeed looks like binary
> layout changes.  Looking at these plans and at profiles spanning a run
> of all queries shows that bitmap scans and hash aggregations, while
> present, account for a very small amount of time in total. So tpc-ds
> doesn't look particularly interesting to evaluate these patches - but
> very interesting for my slot deforming and qual evaluation patches.

Meh, that's annoying. Anyway, the reason why many of the TPC-DS queries
can't really benefit from the patch is that a lot of the queries use
grouping sets, and we only have a sorted implementation for that.

I wonder whether the TPC-H differences are also affected by this ...

> Btw, query_14.sql as generated by your templates (in pgperffarm) doesn't
> seem to work here. And I never had the patience to run query_1.sql to
> completion... Looks like we could very well use some planner
> improvements here.

Yeah, planner improvements would be great ;-)

Query 14 needs an explicit cast to text, otherwise you get a "can't cast
unknown to text" style error. Not sure if that's an expected issue or
not, but the cast fixes it. I haven't pushed that fix to the pgperffarm
repo yet, along with several other tooling fixes.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] autonomous transactions

2016-10-11 Thread Merlin Moncure
On Tue, Oct 11, 2016 at 10:06 AM, Petr Jelinek  wrote:
> On 10/10/16 16:44, Merlin Moncure wrote:
>> On Thu, Oct 6, 2016 at 3:53 PM, Simon Riggs  wrote:
>>> On 6 October 2016 at 21:27, Robert Haas  wrote:
 I think we should implement background transactions and call them
 background transactions.  That allows us to expose additional
 functionality which is useful, like the ability to kick something off
 and check back later for the results.  There's no reason to call it
 background transactions and also call it autonomous transactions: one
 feature doesn't need two names.
>>>
>>> I'm happy to also invoke it via an alternate mechanism or API, so that
>>> it can continue to be used even if the above mechanism changes.
>>>
>>> We have no need to wait for the perfect solution, even assuming we
>>> would ever agree that just one exists.
>>
>> -1 on implementing both autonomous and background transactions.  This
>> will confuse everyone.
>
> I personally care much more about having background transactions than
> autonomous ones (as I only ever had use-cases for the background ones)
> so don't agree there.

All right.  But would you agree then that AT should at least emulate
competing implementations? A major advantage of bgworkers is possibly
supporting concurrent activity, and maybe the syntax could be directed
more toward moving in that direction rather than copying the Oracle
style (PRAGMA etc.), particularly if the locking rules are
substantially different.

merlin




Re: [HACKERS] Is it time to kill support for very old servers?

2016-10-11 Thread Tom Lane
I wrote:
> pg_dump alleges support for dumping from servers back to 7.0.  Would v10
> be a good time to remove some of that code?  It's getting harder and
> harder to even compile those ancient branches, let alone get people to
> test against them (cf. 4806f26f9).  My initial thought is to cut support
> for pre-7.3 or maybe pre-7.4 servers, as that would allow removal of
> support for cases where the server lacks schemas or pg_depend, each of
> which requires a fair deal of klugery in pg_dump.

Per subsequent discussion, we should use a round number as the version
cutoff, so attached is a patch that removes pg_dump/pg_dumpall support
for dumping from servers before 8.0.  This eliminates circa 1500 lines
of code, or about 4% of the existing code in src/bin/pg_dump/.  So it's
not a huge savings, but it eliminates a lot of hard-to-test cases.

I (think I) did not do anything that would damage pg_restore's ability to
read dump archives produced by pre-8.0 versions.  I'm pretty hesitant to
remove that code since (a) there's not all that much of it, and (b) it's
easy to imagine scenarios where an old backup dump is all someone's got.
However, that's another set of cases that's darn hard to test, so it's
somewhat questionable whether we may have broken it already.

Comments, objections?

regards, tom lane



pg_dump-drop-old-server-support.patch.gz
Description: pg_dump-drop-old-server-support.patch.gz



Re: [HACKERS] Merge Join with an Index Optimization

2016-10-11 Thread Claudio Freire
On Sun, Sep 11, 2016 at 1:20 PM, Tom Lane  wrote:
> Michael Malis  writes:
>> As I understand it, a merge join will currently read all tuples from both
>> subqueries (besides early termination). I believe it should be possible to
>> take advantages of the indexes on one or both of the tables being read from
>> to skip a large number of tuples that would currently be read.
>
> IIUC, what you're proposing is to replace "read past some tuples in the
> index" with "re-descend the index btree".  Yes, that would win if it
> allows skipping over a large number of index entries, but it could lose
> big-time if not.  The difficulty is that I don't see how the merge join
> level could predict whether it would win for any particular advance step.
> You'd really need most of the smarts to be in the index AM.
>
> You might want to troll the archives for past discussions of index skip
> scan, which is more or less the same idea (could possibly be exactly the
> same idea, depending on how it's implemented).  I think we had arrived
> at the conclusion that re-descent from the root might be appropriate
> when transitioning to another index page, but not intra-page.  Anyway
> no one's produced a concrete patch yet.

Interesting that it should be brought up.

I was considering the index skip optimization for vacuum at some
point, might as well consider it for regular scans too if I get to
that.

Basically, instead of a simple get next, I was considering adding a
"get next skip until". The caller would be the one responsible for
sending the hint, and the index am would be free to implement the skip
in a smart way if possible.

The interface for vacuum is a bit different in practice, but
conceptually the same.
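A self-contained toy model of the trade-off Tom describes (plain C over a
sorted array, not PostgreSQL code; all sizes and values are made up):
"read past some tuples" is a linear advance whose cost grows with the
distance, while "re-descend the index btree" behaves like a binary search
whose cost is O(log n) regardless of distance:

#include <stdio.h>
#include <stdlib.h>

/* "Read past some tuples": step forward until key >= target. */
static size_t
advance_linear(const int *keys, size_t n, size_t pos, int target)
{
	while (pos < n && keys[pos] < target)
		pos++;
	return pos;
}

/* "Re-descend from the root": binary search for the first key >= target. */
static size_t
advance_redescend(const int *keys, size_t n, size_t pos, int target)
{
	size_t		lo = pos,
				hi = n;

	while (lo < hi)
	{
		size_t		mid = lo + (hi - lo) / 2;

		if (keys[mid] < target)
			lo = mid + 1;
		else
			hi = mid;
	}
	return lo;
}

int
main(void)
{
	enum { N = 1000000 };
	int		   *keys = malloc(N * sizeof(int));

	for (int i = 0; i < N; i++)
		keys[i] = i * 2;

	/* both land on the same position; the costs differ wildly */
	printf("linear: %zu, re-descend: %zu\n",
		   advance_linear(keys, N, 0, 1500000),
		   advance_redescend(keys, N, 0, 1500000));
	free(keys);
	return 0;
}

The hard part, as noted above, is that neither the merge join nor this toy
can know the skip distance in advance, which is why the decision likely
belongs inside the index AM.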




Re: [HACKERS] Is it time to kill support for very old servers?

2016-10-11 Thread Heikki Linnakangas

On 10/11/2016 08:23 PM, Tom Lane wrote:
> Not sure where to go from here, but the idea of dropping V2 support
> is seeming attractive again.  Or we could just continue the policy
> of benign neglect.

Let's drop it.

- Heikki





Re: [HACKERS] Is it time to kill support for very old servers?

2016-10-11 Thread Tom Lane
Robert Haas  writes:
> On Fri, Oct 7, 2016 at 11:34 AM, Tom Lane  wrote:
>> The problem with letting it just sit there is that we're not, in fact,
>> testing it.  If we take the above argument seriously then we should
>> provide some way to configure libpq to prefer V2 and run regression
>> tests in that mode.  Otherwise, if/when we break it, we'll never know it
>> till we get field reports.

> I agree with that.  I think it would be fine to keep V2 support if
> somebody wants to do the work to let us have adequate test coverage,
> but if nobody volunteers I think we might as well rip it out.  I don't
> particularly enjoy committing things only to be told that they've
> broken something I can't test without unreasonable effort.

I experimented with this and found out that a majority of the regression
tests fall over, because the error messages come out formatted differently
when using V2.  You don't get separate DETAIL or HINT fields, nor cursor
positions, etc.  So there's boatloads of silly diffs like this:

***
*** 69,75 
  INSERT INTO testschema.atable VALUES(3);  -- ok
  INSERT INTO testschema.atable VALUES(1);  -- fail (checks index)
  ERROR:  duplicate key value violates unique constraint "anindex"
- DETAIL:  Key (column1)=(1) already exists.
  SELECT COUNT(*) FROM testschema.atable;   -- checks heap
   count 
  ---
--- 69,74 

Some of the tests have harder-to-ignore failures because protocol V2
had no way to recover from a failure during COPY IN, short of aborting
the connection:

--- 34,42 
  -- missing data: should fail
  COPY x from stdin;
  ERROR:  invalid input syntax for integer: ""
! FATAL:  terminating connection because protocol synchronization was lost
  COPY x from stdin;
! server closed the connection unexpectedly
!   This probably means the server terminated abnormally
!   before or while processing the request.
! connection to server was lost


At this point I'm feeling pretty discouraged about testability of V2.
I can't see us maintaining a variant regression test suite that would
accommodate these issues.  So the easy idea of "build with some extra
#define and then run the usual regression tests" is out.

Now, we hardly need the entire regression suite to provide adequate
coverage of V2 protocol.  You could imagine a small dedicated test
that just checks basic commands, errors, notices, COPY.  The problem
is that we'd need infrastructure in either the backend or libpq to
dynamically force use of V2 for that test, and that's work that I for
one don't really care to put in.

Not sure where to go from here, but the idea of dropping V2 support
is seeming attractive again.  Or we could just continue the policy
of benign neglect.

regards, tom lane




Re: [HACKERS] Macro customizable hashtable / bitmapscan & aggregation perf

2016-10-11 Thread Andres Freund
On 2016-10-11 04:29:31 +0200, Tomas Vondra wrote:
> On 10/11/2016 04:07 AM, Andres Freund wrote:
> > On 2016-10-10 17:46:22 -0700, Andres Freund wrote:
> > > > TPC-DS (tpcds.ods)
> > > > --
> > > > 
> > > > In this case, I'd say the results are less convincing. There are quite 
> > > > a few
> > > > queries that got slower by ~10%, which is well above - for example 
> > > > queries
> > > > 22 and 67. There are of course queries that got ~10% faster, and in 
> > > > total
> > > > the patched version executed more queries (so overall the result is 
> > > > slightly
> > > > positive, but not significantly).
> > > 
> > > That's interesting. I wonder whether that's plan changes just due to the
> > > changing memory estimates, or what's causing that. I'll look into it.
> > 
> > Hm. Based on an initial look those queries aren't planned with any of
> > the affected codepaths.  Could this primarily be a question of
> > randomness? Would it perhaps make sense to run the tests in a comparable
> > order? Looking at tpcds.py and the output files, it seems that the query
> > order differs between the runs; that can easily explain bigger
> > differences than the above. For me a scale=1 run creates a database of
> > approximately 4.5GB, thus with shared_buffers=1GB execution order is
> > likely to have a significant performance impact.
> > 
> 
> Yes, I see similar plans (no bitmap index scans or hash aggregates). But the
> difference is there, even when running the query alone (so it's not merely
> due to the randomized ordering).

> I wonder whether this is again due to compiler moving stuff around.

It looks like that. I looked through a significant set of plans where
there were time differences (generated on my machine), and none of them
had bitmap or hash groupings to any significant degree.  Comparing
profiles in those cases usually shows a picture like:
24.98%   +0.32%  postgres[.] slot_deform_tuple
16.58%   -0.05%  postgres[.] ExecMakeFunctionResultNoSets
12.41%   -0.01%  postgres[.] slot_getattr
 6.10%   +0.39%  postgres[.] heap_getnext
 4.41%   -0.37%  postgres[.] ExecQual
 3.08%   +0.12%  postgres[.] ExecEvalScalarVarFast
 2.85%   -0.11%  postgres[.] check_stack_depth
 2.48%   +0.42%  postgres[.] ExecEvalConst
 2.44%   -0.33%  postgres[.] heapgetpage
 2.34%   +0.11%  postgres[.] ExecScan
 2.14%   -0.20%  postgres[.] ExecStoreTuple

I.e. pretty random performance changes. This indeed looks like binary
layout changes.  Looking at these plans and at profiles spanning a run
of all queries shows that bitmap scans and hash aggregations, while
present, account for a very small amount of time in total. So tpc-ds
doesn't look particularly interesting to evaluate these patches - but
very interesting for my slot deforming and qual evaluation patches.

Btw, query_14.sql as generated by your templates (in pgperffarm) doesn't
seem to work here. And I never had the patience to run query_1.sql to
completion... Looks like we could very well use some planner
improvements here.


Greetings,

Andres Freund




Re: [HACKERS] autonomous transactions

2016-10-11 Thread Pavel Stehule
2016-10-11 17:06 GMT+02:00 Petr Jelinek :

> On 10/10/16 16:44, Merlin Moncure wrote:
> > On Thu, Oct 6, 2016 at 3:53 PM, Simon Riggs 
> wrote:
> >> On 6 October 2016 at 21:27, Robert Haas  wrote:
> >>> I think we should implement background transactions and call them
> >>> background transactions.  That allows us to expose additional
> >>> functionality which is useful, like the ability to kick something off
> >>> and check back later for the results.  There's no reason to call it
> >>> background transactions and also call it autonomous transactions: one
> >>> feature doesn't need two names.
> >>
> >> I'm happy to also invoke it via an alternate mechanism or API, so that
> >> it can continue to be used even if the above mechanism changes.
> >>
> >> We have no need to wait for the perfect solution, even assuming we
> >> would ever agree that just one exists.
> >
> > -1 on implementing both autonomous and background transactions.  This
> > will confuse everyone.
> >
>
> I personally care much more about having background transactions than
> autonomous ones (as I only ever had use-cases for the background ones)
> so don't agree there.
>

We can and should have both - background transactions can be used for
parallelism, autonomous ones for logging.

They are not 100% interchangeable.

Regards

Pavel


>
> --
>   Petr Jelinek  http://www.2ndQuadrant.com/
>   PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] autonomous transactions

2016-10-11 Thread Petr Jelinek
On 10/10/16 16:44, Merlin Moncure wrote:
> On Thu, Oct 6, 2016 at 3:53 PM, Simon Riggs  wrote:
>> On 6 October 2016 at 21:27, Robert Haas  wrote:
>>> I think we should implement background transactions and call them
>>> background transactions.  That allows us to expose additional
>>> functionality which is useful, like the ability to kick something off
>>> and check back later for the results.  There's no reason to call it
>>> background transactions and also call it autonomous transactions: one
>>> feature doesn't need two names.
>>
>> I'm happy to also invoke it via an alternate mechanism or API, so that
>> it can continue to be used even if the above mechanism changes.
>>
>> We have no need to wait for the perfect solution, even assuming we
>> would ever agree that just one exists.
> 
> -1 on implementing both autonomous and background transactions.  This
> will confuse everyone.
> 

I personally care much more about having background transactions than
autonomous ones (as I only ever had use-cases for the background ones)
so don't agree there.

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] int2vector and btree indexes

2016-10-11 Thread Tom Lane
Amit Langote  writes:
> I was wrong that the index *never* gets used.  It does in fact get used if
> the operator is an ordering search operator (<, <=, >, >=), in which case
> the query would use an array_ops operator (which is a btree operator class
> for type anyarray) and hence matches the index operator family.  I failed
> to mention in my original message that int2vector_ops is a hash operator
> class.  There is exactly one =(int2vector, int2vector) operator in the
> system, and it has no btree equivalent.

Hmm ... I kind of wonder why we have int2vectoreq or hashint2vector at
all, likewise the hash opclass based on them.  The code says that they
are needed to support catcache index columns, but the only columns of
this type are

regression=# select attrelid::regclass,attname from pg_attribute where atttypid 
= 'int2vector'::regtype;
  attrelid  |  attname  
+---
 pg_index   | indkey
 pg_index   | indoption
 pg_trigger | tgattr
(3 rows)

and those don't have indexes at all, let alone catcaches based on them.
So it looks to me like we could remove this infrastructure.  There is
value in being able to hash int2vectors during queries, for sure, but
we could let that be done by the anyarray hash opclass.

Having said that, int2vector is not meant as a user-facing type and so
I don't particularly care whether indexes built on it work conveniently.
But it looks to me like we've got some unnecessary code here.

regards, tom lane




[HACKERS] Proposal: scan key push down to heap [WIP]

2016-10-11 Thread Dilip Kumar
Hi Hackers,

I would like to propose a patch for pushing scan keys down to the heap.

Currently, scan keys are pushed down only in the case of system table
scans. I have implemented a POC patch to do the same for normal table
scans.

This patch extracts the expressions from the qual and prepares the scan
keys. Currently, the POC version only supports "var OP const" quals,
because these can be pushed down using the existing framework.

The purpose of this work is to first implement the basic functionality
and analyze the results. If the results are good, then we can extend it
to other types of expressions.

However, when we later try to expand the support to complex expressions,
we will need to be very careful about which expressions we select as
pushable. We must not push something very complex that could cause
contention with other write operations (as HeapKeyTest is done under the
page lock).
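For illustration, here is a minimal sketch of how the existing ScanKey
machinery expresses a "var OP const" qual such as "a < 10" on an int4
column. The column number, type, and constant are assumptions for the
example, not taken from the patch, and the heap_beginscan signature is the
9.6-era one:

#include "postgres.h"
#include "access/heapam.h"
#include "access/skey.h"
#include "access/stratnum.h"
#include "utils/fmgroids.h"

/*
 * Begin a heap scan with the qual "a < 10" pushed down as a scan key,
 * assuming column "a" is attribute 1 and of type int4.
 */
static HeapScanDesc
begin_pushed_down_scan(Relation rel, Snapshot snapshot)
{
	ScanKeyData key;

	ScanKeyInit(&key,
				1,						/* attribute number of "a" */
				BTLessStrategyNumber,	/* the "<" strategy */
				F_INT4LT,				/* comparison proc int4lt */
				Int32GetDatum(10));		/* the constant */

	return heap_beginscan(rel, snapshot, 1, &key);
}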

Performance Test: (test done in local machine, with all default setting).

Setup:
--

create table test(a int, b varchar, c varchar, d int);
insert into test values(generate_series(1,1000), repeat('x', 30),
repeat('y', 30), generate_series(1,1000));
analyze test;

Test query:
--
select count(*) from test where a < $1;

Results: (execution time in ms)

Selectivity   Head (ms)   Patch (ms)   Gain
0.01              612          307      49%
0.1               623          353      43%
0.2               645          398      38%
0.5               780          535      31%
0.8               852          590      30%
1                 913          730      20%

Instructions (CPU instructions measured with the callgrind tool):

Query: select count(*) from test where a < 10;

Head: 10,815,730,925
Patch: 4,780,047,331

Summary:
--
 1. ~50% reduction in both instructions and execution time.
 2. We see a ~20% execution time reduction even at selectivity 1 (when
all tuples are selected); the likely reason is that HeapKeyTest is much
simpler than ExecQual. It's possible that when we later support a wider
variety of keys, the gain at high selectivity may come down.

WIP patch attached..

Thoughts ?

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


heap_scankey_pushdown_v1.patch
Description: Binary data



Re: [HACKERS] Quorum commit for multiple synchronous replication.

2016-10-11 Thread Michael Paquier
On Tue, Oct 11, 2016 at 6:08 PM, Michael Paquier
 wrote:
> On Tue, Oct 11, 2016 at 4:18 PM, Masahiko Sawada  
> wrote:
>>> You may want to add an assert in
>>> SyncRepGetSyncStandbysPriority and SyncRepGetSyncStandbysQuorum to be
>>> sure that they are getting called for the correct method.
>>> +   /* Sort each array in descending order to get 'pos' newest element */
>>> +   qsort(write_array, len, sizeof(XLogRecPtr), cmp_lsn);
>>> +   qsort(flush_array, len, sizeof(XLogRecPtr), cmp_lsn);
>>> +   qsort(apply_array, len, sizeof(XLogRecPtr), cmp_lsn);
>>> There is no need to reorder things again and to use arrays, you can
>>> choose the newest LSNs when scanning the WalSnd entries.
>>
>> I considered that, but it depends on performance.
>> Current patch avoids O(N*M).
>
> I am surprised by this statement. You would have O(N) by just
> discarding the oldest LSN values while holding the spinlock of each
> WAL sender. What SyncRepGetNNewestSyncRecPtr looks for are just the
> newest apply, write and flush positions.

Bah, stupid. I just missed the point with 'pos'. Now I see the trick.
-- 
Michael




Re: [HACKERS] Quorum commit for multiple synchronous replication.

2016-10-11 Thread Michael Paquier
On Tue, Oct 11, 2016 at 4:18 PM, Masahiko Sawada  wrote:
>> You may want to add an assert in
>> SyncRepGetSyncStandbysPriority and SyncRepGetSyncStandbysQuorum to be
>> sure that they are getting called for the correct method.
>> +   /* Sort each array in descending order to get 'pos' newest element */
>> +   qsort(write_array, len, sizeof(XLogRecPtr), cmp_lsn);
>> +   qsort(flush_array, len, sizeof(XLogRecPtr), cmp_lsn);
>> +   qsort(apply_array, len, sizeof(XLogRecPtr), cmp_lsn);
>> There is no need to reorder things again and to use arrays, you can
>> choose the newest LSNs when scanning the WalSnd entries.
>
> I considered that, but it depends on performance.
> Current patch avoids O(N*M).

I am surprised by this statement. You would have O(N) by just
discarding the oldest LSN values while holding the spinlock of each
WAL sender. What SyncRepGetNNewestSyncRecPtr looks for are just the
newest apply, write and flush positions.
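To spell out the scanning approach (illustrative C only; the real code
would read each value under the corresponding WAL sender's spinlock, and
all names here are stand-ins): keep a small descending array of the 'pos'
newest LSNs and discard older values as the walsenders are scanned, so no
separate sort pass is needed:

#include <stdint.h>
#include <string.h>

typedef uint64_t XLogRecPtr;	/* stand-in for the real typedef */

/*
 * Fold one walsender's LSN into a descending array of the 'pos' newest
 * values seen so far; slots start out as 0 (InvalidXLogRecPtr).
 */
static void
keep_newest(XLogRecPtr *newest, int pos, XLogRecPtr lsn)
{
	for (int i = 0; i < pos; i++)
	{
		if (lsn > newest[i])
		{
			/* shift the smaller entries down and insert */
			memmove(&newest[i + 1], &newest[i],
					(pos - i - 1) * sizeof(XLogRecPtr));
			newest[i] = lsn;
			break;
		}
	}
}

After the scan, newest[pos - 1] is the LSN that a quorum of 'pos' standbys
has reached.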
-- 
Michael




Re: [HACKERS] memory leak in e94568ecc10 (pre-reading in external sort)

2016-10-11 Thread Heikki Linnakangas

On 10/11/2016 12:56 AM, Peter Geoghegan wrote:
> Also, something about trace_sort here:
>
> +#ifdef TRACE_SORT
> +   if (trace_sort)
> +   elog(LOG, "using " INT64_FORMAT " KB of memory for read buffers among %d input tapes",
> +(state->availMem) / 1024, numInputTapes);
> +#endif
> +
> +   state->read_buffer_size = state->availMem / numInputTapes;
> +   USEMEM(state, state->availMem);
>
> Maybe you should just make the trace_sort output talk about blocks at
> this point?

With # of blocks, you then have to mentally divide by 8 to get the
actual memory used. I think kB is nicer in messages that are meant to be
read by humans.


The bigger problem with this message is that it's not very accurate 
anymore. The actual amount of memory used is rounded down, and capped by 
MaxAllocSize*numInputTapes. And would it be better to print the per-tape 
buffer size, rather than the total?
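To make the rounding and capping concrete, here is a small self-contained
illustration (plain C; the work_mem figure and tape count are made up, and
MaxAllocSize matches PostgreSQL's 0x3fffffff):

#include <stdint.h>
#include <stdio.h>

#define MaxAllocSize ((int64_t) 0x3fffffff)		/* ~1 GB - 1 */

int
main(void)
{
	int64_t		avail_mem = 8LL * 1024 * 1024 * 1024;	/* say, 8 GB */
	int			num_input_tapes = 4;

	/* integer division rounds down; each buffer is capped at MaxAllocSize */
	int64_t		per_tape = avail_mem / num_input_tapes;

	if (per_tape > MaxAllocSize)
		per_tape = MaxAllocSize;

	printf("per tape: %lld kB, actual total: %lld kB, availMem: %lld kB\n",
		   (long long) (per_tape / 1024),
		   (long long) (per_tape * num_input_tapes / 1024),
		   (long long) (avail_mem / 1024));
	return 0;
}

With these numbers the message as written would report twice the memory
actually used, which is the inaccuracy described above.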


- Heikki





Re: [HACKERS] int2vector and btree indexes

2016-10-11 Thread Amit Langote
On 2016/10/11 15:58, Amit Langote wrote:
> If I create btree index on a int2vector column, it does not get used for
> queries because the query search always fails to match the index operator
> (family).
> 
> During index creation, GetDefaultOpClass() returns array_ops for a
> int2vector index column, because type int2vector is binary-coercible with
> anyarray (which is array_ops's input type).  Whereas queries involving
> int2vector columns would use a int2vector_ops operator.

I was wrong that the index *never* gets used.  It does in fact get used if
the operator is an ordering search operator (<, <=, >, >=), in which case
the query would use an array_ops operator (which is a btree operator class
for type anyarray) and hence matches the index operator family.  I failed
to mention in my original message that int2vector_ops is a hash operator
class.  There is exactly one =(int2vector, int2vector) operator in the
system, and it has no btree equivalent.

I guess there is not much to complain about here after all.  Sorry about
the noise.

Thanks,
Amit






Re: [HACKERS] kqueue

2016-10-11 Thread Torsten Zuehlsdorff

On 28.09.2016 23:39, Thomas Munro wrote:
> On Thu, Sep 29, 2016 at 9:09 AM, Keith Fiske  wrote:
>> On Thu, Sep 15, 2016 at 11:11 PM, Thomas Munro
>>  wrote:
>>> Ok, here's a version tweaked to use EVFILT_PROC for postmaster death
>>> detection instead of the pipe, as Tom Lane suggested in another
>>> thread[1].
>>>
>>> [...]
>>
>> Ran benchmarks on unaltered 96rc1 again just to be safe. Those are first.
>> Decided to throw a 32-process test in there as well to see if there's
>> anything going on between 4 and 64.
>
> Thanks!  A summary:
>
> [summary]
>
> The variation in the patched 64-client numbers is quite large, ranging
> from ~66.5k to ~79.5k.  The highest number matched the unpatched
> numbers, which ranged from 77.9k to 80k.  I wonder if that is noise and
> we need to run longer (in which case the best outcome might be 'this
> patch is neutral on FreeBSD'), or if something the patch does is causing
> that (for example, maybe the EVFILT_PROC filter causes contention on the
> process table lock).
>
> [..]
>
> It's difficult to draw any conclusions at this point.

I'm currently setting up a new FreeBSD machine. It's FreeBSD 11 with
ZFS, 64 GB RAM and a quad-core CPU. If you're interested, I can give you
access for more tests this week. Maybe this will help to draw a
conclusion.
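For readers unfamiliar with the mechanism under discussion, here is a
minimal standalone sketch of a kqueue EVFILT_PROC exit watch (watching the
parent process purely for demonstration; the patch applies the same filter
to the postmaster's PID):

#include <sys/types.h>
#include <sys/event.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	int				kq = kqueue();
	struct kevent	ev;

	/* register interest in the exit of one process (here: our parent) */
	EV_SET(&ev, getppid(), EVFILT_PROC, EV_ADD, NOTE_EXIT, 0, NULL);
	if (kevent(kq, &ev, 1, NULL, 0, NULL) < 0)
		perror("kevent register");

	/* block until the watched process exits */
	if (kevent(kq, NULL, 0, &ev, 1, NULL) == 1 && (ev.fflags & NOTE_EXIT))
		printf("process %d exited\n", (int) ev.ident);
	return 0;
}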


Greetings,
Torsten




Re: [HACKERS] proposal: schema PL session variables

2016-10-11 Thread Pavel Stehule
Hi

2016-02-23 20:52 GMT+01:00 Pavel Stehule :

>
>
> 2016-02-12 22:41 GMT+01:00 Jim Nasby :
>
>> On 2/12/16 2:58 PM, Pavel Stehule wrote:
>>
>>>
>>> So I think adding something like this needs to at least address
>>> *how* SQL level access would work, *when* it's eventually
>>> implemented.
>>>
>>>
>>> I understand - and I agree.
>>>
>>> small note: Private variables should not be accessible from arbitrary
>>> SQL, because plain SQL has no directly related schema. They can be
>>> accessed only from SQL embedded in some object with an attached schema -
>>> PL functions, views, constraints, etc. So for this use case, the
>>> important information is info about the container. We have to hold info
>>> about the related schema in the planner/executor context.
>>>
>>
>> I think that's probably true, but this also shows why we need to consider
>> different PLs too. As it stands right now, the only way to access a
>> variable outside of plpgsql would be to call a plpgsql function, and
>> currently there's no way to make a plpgsql function private. So for this to
>> work, private variables probably need to be exposed directly through the pl
>> handler.
>>
>> Again, I'm not saying this all has to be implemented up front, but there
>> needs to be a plan for how it would work.
>>
>> I think it would be a good idea to start a wiki page on this topic to
>> start collecting stuff together.
>
>
> I wrote some basic info -
> https://wiki.postgresql.org/wiki/CREATE_PRIVATE_VARIABLE
>

I thought about this feature and now I think it is really similar to
sequences. Sure, there are differences - but the workflow is pretty
similar: created and dropped by CREATE and DROP statements, accessed with
functions everywhere (and, in some later iteration, directly in PLpgSQL).
The content can be of any type, stored in memory - session- or
transaction-scoped. In a first iteration it is initialized to the default
value when first accessed in scope. Accessibility can be controlled by
rights on the schema.

Syntax:

CREATE (SESSION|TRANSACTION) VARIABLE schema.xx datatype DEFAULT ...;
DROP VARIABLE schema.xx;

Access:

SELECT setvar(regclass, "any"); -- supported by the parser - enforcing
"any" to the variable's datatype
SELECT getvar(regclass); -- returns "any" - supported by the parser -
enforcing "any" to the variable's datatype

The access rights on variables can be exactly the same as the rights on sequences.
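
A hypothetical usage sketch of the proposed interface (nothing here is
implemented yet; the schema, variable name and cast behavior are all
assumptions):

    CREATE SESSION VARIABLE myschema.counter int DEFAULT 0;
    SELECT setvar('myschema.counter'::regclass, 10);  -- parser enforces int
    SELECT getvar('myschema.counter'::regclass);      -- returns 10
    DROP VARIABLE myschema.counter;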

Regards

Pavel





>
>
> I changed my opinion on the initialization part. Private variables with a
> non-null default should be initialized at session start. It is much more
> practical, and it can be used for triggering custom ON CONNECT routines.
>
> Regards
>
> Pavel
>
>
>
>>
>> --
>> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
>> Experts in Analytics, Data Architecture and PostgreSQL
>> Data in Trouble? Get it in Treble! http://BlueTreble.com
>>
>
>


Re: [HACKERS] proposal: psql \setfileref

2016-10-11 Thread Pavel Stehule
2016-10-11 9:32 GMT+02:00 Gilles Darold :

> On 11/10/2016 at 07:53, Pavel Stehule wrote:
>
>
>
> 2016-10-10 19:50 GMT+02:00 Pavel Stehule :
>
>>
>>
>> 2016-10-10 15:17 GMT+02:00 Gilles Darold :
>>
>>> On 10/10/2016 at 14:38, Daniel Verite wrote:
>>> >   Gilles Darold wrote:
>>> >
>>> >>postgres=# \setfileref b /dev/random
>>> >>postgres=# insert into test (:b);
>>> >>
>>> >> Process need to be killed using SIGTERM.
>>> > This can be fixed by setting sigint_interrupt_enabled to true
>>> > before operating on the file. Then ctrl-C would be able to cancel
>>> > the command.
>>>
>>> If we do not prevent the user from loading special files, that
>>> would be useful, yes.
>>>
>>
>> I don't think special files make much sense here; I just had no time to
>> fix this issue. I will do it soon - I hope.
>>
>
> fresh patch - only regular files are allowed
>
> Regards
>
> Pavel
>
>
> Hi,
>
> The patch works as expected; the last two remaining issues have been
> fixed. I've changed the status to "Ready for committers".
>
> Thanks Pavel.
>
Thank you very much

Regards

Pavel


> --
> Gilles Darold
> Consultant PostgreSQL
> http://dalibo.com - http://dalibo.org
>
>


Re: [HACKERS] proposal: psql \setfileref

2016-10-11 Thread Gilles Darold
On 11/10/2016 at 07:53, Pavel Stehule wrote:
>
>
> 2016-10-10 19:50 GMT+02:00 Pavel Stehule:
>
>
>
>     2016-10-10 15:17 GMT+02:00 Gilles Darold:
>
> On 10/10/2016 at 14:38, Daniel Verite wrote:
> >   Gilles Darold wrote:
> >
> >>postgres=# \setfileref b /dev/random
> >>postgres=# insert into test (:b);
> >>
> >> Process need to be killed using SIGTERM.
> > This can be fixed by setting sigint_interrupt_enabled to true
> > before operating on the file. Then ctrl-C would be able to cancel
> > the command.
>
> If we do not prevent the user from loading special files, that
> would be useful, yes.
>
>
> I don't think special files make much sense here; I just had no time to
> fix this issue. I will do it soon - I hope.
>
>
> fresh patch - only regular files are allowed
>
> Regards
>
> Pavel
>

Hi,

The patch works as expected; the last two remaining issues have been
fixed. I've changed the status to "Ready for committers".
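
For the archives, the verified behavior as a hypothetical session (paths
are made up):

    postgres=# \setfileref img /tmp/avatar.png
    postgres=# insert into test values (:img);  -- regular file: content loaded
    postgres=# \setfileref r /dev/random
    postgres=# insert into test values (:r);    -- special file: now rejected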

Thanks Pavel.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



Re: [HACKERS] Quorum commit for multiple synchronous replication.

2016-10-11 Thread Masahiko Sawada
On Wed, Sep 28, 2016 at 5:14 PM, Michael Paquier
 wrote:
> On Sat, Sep 24, 2016 at 5:37 PM, Masahiko Sawada  
> wrote:
>> I still vote for changing behaviour of existing syntax 'k (n1, n2)' to
>> quorum commit.
>> That is,
>> 1.  'First k (n1, n2, n3)' means that the master server waits for ACKs
>> from the k standby servers whose names appear earliest in the list.
>> 2.  'Any k (n1, n2, n3)' means that the master server waits for ACKs
>> from any k listed standby servers.
>> 3.  'n1, n2, n3' is the same as #1 with k=1.
>> 4.  '(n1, n2, n3)' is the same as #2 with k=1.
>
> OK, so I have done a review of this patch keeping that in mind as
> that's the consensus. I am still getting familiar with the code...

Thank you for reviewing!

> -transactions will wait until all the standby servers which are considered
> +transactions will wait until the multiple standby servers which
> are considered
> There is no real need to update this sentence.
>
> +FIRST means to control the standby servers with
> +different priorities. The synchronous standbys will be those
> +whose name appear earlier in this list, and that are both
> +currently connected and streaming data in real-time(as shown
> +by a state of streaming in the
> +
> +pg_stat_replication view). Other standby
> +servers appearing later in this list represent potential
> +synchronous standbys. If any of the current synchronous
> +standbys disconnects for whatever reason, it will be replaced
> +immediately with the next-highest-priority standby.
> +For example, a setting of FIRST 3 (s1, s2, s3, s4)
> +makes transaction commits wait until their WAL records are received
> +by three higher-priority standbys chosen from standby servers
> +s1, s2, s3 and s4.
> It does not seem necessary to me to go into this level of detail:
> The keyword FIRST, coupled with an integer number N, chooses the first
> N higher-priority standbys and makes transaction commit when their WAL
> records are received. For example FIRST 3 (s1, s2, s3, s4)
> makes transaction commits wait until their WAL records are received by
> the three high-priority standbys chosen from standby servers s1, s2,
> s3 and s4.

Will fix.

> +ANY means to control all of standby servers with
> +same priority. The master sever will wait for receipt from
> +at least num_sync
> +standbys, which is quorum commit in the literature. The all of
> +listed standbys are considered as candidate of quorum commit.
> +For example, a setting of  ANY 3 (s1, s2, s3, s4) makes
> +transaction commits wait until receiving receipts from at least
> +any three standbys of four listed servers s1,
> +s2, s3, s4.
>
> Similarly, something like that...
> The keyword ANY, coupled with an integer number N, chooses N standbys
> in a set of standbys with the same, lowest, priority and makes
> transaction commit when WAL records are received from those N standbys. For
> example ANY 3(s1, s2, s3, s4) makes transaction commits wait until WAL
> records have been received from 3 servers in the set s1, s2, s3 and
> s4.

Will fix.

> It could also be good to mention that no keyword specified means ANY,
> which is incompatible with 9.6. The docs also miss the fact that if a
> simple list of servers is given, without parentheses and keywords,
> this is equivalent to FIRST 1.

Right. I will add that documentation.
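
For reference, the resulting forms would be (patch syntax, not committed;
shown via ALTER SYSTEM purely for illustration):

    ALTER SYSTEM SET synchronous_standby_names = 'FIRST 2 (s1, s2, s3)';  -- two highest-priority standbys
    ALTER SYSTEM SET synchronous_standby_names = 'ANY 2 (s1, s2, s3)';    -- any two of the three (quorum)
    ALTER SYSTEM SET synchronous_standby_names = 's1, s2, s3';            -- same as FIRST 1 (s1, s2, s3)
    ALTER SYSTEM SET synchronous_standby_names = '(s1, s2, s3)';          -- same as ANY 1 (s1, s2, s3)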

> -synchronous_standby_names = '2 (s1, s2, s3)'
> +synchronous_standby_names = 'First 2 (s1, s2, s3)'
> Nit here. It may be a good idea to just use upper-case characters in
> the docs, or just lower-case for consistency, but not mix both.
> Usually GUCs use lower-case characters.

Agree. Will fix.

> +  when standby is considered as a condidate of quorum commit.
> s/condidate/candidate/

Will fix.

> -syncrep_scanner.c: FLEXFLAGS = -CF -p
> +syncrep_scanner.c: FLEXFLAGS = -CF -p -i
> Hm... Is that actually a good idea? Now "NODE" and "node" are two
> different things for application_name, but with this patch both would
> have the same meaning. I am getting to think that we could just use
> the lower-case characters for the keywords any/first. Is this -i
> switch a problem for elements in standby_list?

The standby name strings are not actually changed; only the parser no
longer distinguishes between "NODE" and "node".
The values used for checking application_name will still work fine.
If we want to use "first" or "any" as a standby name, then it should
be double-quoted.
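
For instance (a sketch assuming the patch's case-insensitive keywords;
server names are made up):

    -- a standby literally named "first" or "any" must be double-quoted:
    ALTER SYSTEM SET synchronous_standby_names = 'any 1 ("first", "any")';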

> + * Calculate the 'pos' newest Write, Flush and Apply positions among
> sync standbys.
> I don't understand this comment.
>
> +   if (SyncRepConfig->sync_method == SYNC_REP_PRIORITY)
> +   got_recptr = SyncRepGetOldestSyncRecPtr(&writePtr, &flushPtr,
> +   &applyPtr, &am_sync);
> +   else /* SYNC_REP_QUORUM */
> + 

[HACKERS] int2vector and btree indexes

2016-10-11 Thread Amit Langote
If I create a btree index on an int2vector column, it does not get used
for queries because the query search always fails to match the index
operator (family).

During index creation, GetDefaultOpClass() returns array_ops for an
int2vector index column, because type int2vector is binary-coercible with
anyarray (which is array_ops's input type).  Whereas queries involving
int2vector columns would use an int2vector_ops operator.

I wonder if the index creation command should rather fail, because an index
thus created will never get used?  Or should is_indexable_operator()
somehow consider the fact that such indexes could in fact exist?

I might be missing something though.

Thanks,
Amit






Re: [HACKERS] Forbid use of LF and CR characters in database and role names

2016-10-11 Thread Michael Paquier
On Tue, Oct 11, 2016 at 11:00 AM, Noah Misch  wrote:
> I count one person disfavoring the patch concept of rejecting these characters
> early, and I count two people, plus yourself as author, favoring it.
> Therefore, the patch can move forward with the proposed design.  The next
> step, then, is for the patch to park in a CommitFest until someone volunteers
> to review the implementation details.

OK, thanks. I thought there was more opposition to it. The original
patches still apply, so I am adding it back to the next CF:
https://commitfest.postgresql.org/11/711/
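
For reference, the kind of name the patch rejects early (hypothetical
example; such names are currently accepted and can break downstream
consumers like pg_dumpall output):

    -- the quoted identifier below contains a literal newline:
    CREATE ROLE "bad
    name";
    -- with the patch, CREATE ROLE / CREATE DATABASE raise an error instead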
-- 
Michael

