Re: [HACKERS] proposal: new long psql parameter --on-error-stop

2014-06-22 Thread MauMau

From: Pavel Stehule pavel.steh...@gmail.com
pg_dumpall aligns all options left with each other, whether they are short
or long.

 -x, --no-privileges  do not dump privileges (grant/revoke)
 --binary-upgrade for use by upgrade utilities only
 --column-inserts dump data as INSERT commands with column names



ok

I fixed it


Thank you.  I marked this patch as ready for committer.

Regards
MauMau




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: new long psql parameter --on-error-stop

2014-06-22 Thread Pavel Stehule
2014-06-22 9:32 GMT+02:00 MauMau maumau...@gmail.com:

 From: Pavel Stehule pavel.steh...@gmail.com

  pg_dumpall aligns all options left with each other, whether they are short
 or long.

  -x, --no-privileges  do not dump privileges (grant/revoke)
  --binary-upgrade for use by upgrade utilities only
  --column-inserts dump data as INSERT commands with column names


 ok

 I fixed it


 Thank you.  I marked this patch as ready for committer.


Thank you very much

Pavel



 Regards
 MauMau





Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-22 Thread Dean Rasheed
On 17 June 2014 20:19, Robert Haas robertmh...@gmail.com wrote:
 On Fri, Jun 13, 2014 at 3:11 AM, Dean Rasheed dean.a.rash...@gmail.com 
 wrote:
 Yeah, I was thinking something like this could work, but I would go
 further. Suppose you had separate GRANTable privileges for direct
 access to individual tables, bypassing RLS, e.g.

   GRANT DIRECT SELECT|INSERT|UPDATE|DELETE ON table_name TO role_name

 So, is this one new privilege (DIRECT) or four separate new privileges
 that are variants of the existing privileges (DIRECT SELECT, DIRECT
 INSERT, DIRECT UPDATE, DIRECT DELETE)?


I was thinking it would be 4 new privileges, so that a user could for
example be granted DIRECT SELECT permission on a table, but not DIRECT
UPDATE.

On reflection though, I think I prefer the approach of allowing
multiple named security policies per table, because it gives the
planner more opportunity to optimize queries against specific RLS
quals, which won't work if the ACL logic is embedded in functions.
That seems like something that would have to be designed in now,
because it's difficult to see how you could add it later.

Managing policy names becomes an issue though, because if you have 2
tables each with 1 policy, but you give them different names, how can
the user querying the data specify that they want policy1 for table1
and policy2 for table2, possibly in the same query? I think that can
be made more manageable by making policies top-level objects that
exist independently of any particular tables. So you might do
something like:

\c - alice
CREATE POLICY policy1;
CREATE POLICY policy2;
ALTER TABLE t1 SET POLICY policy1 TO t1_quals;
ALTER TABLE t2 SET POLICY policy1 TO t2_quals;
...
GRANT SELECT ON TABLE t1, t2 TO bob USING policy1;
GRANT SELECT ON TABLE t1, t2 TO manager; -- Can use any policy, or
bypass all policies

Then a particular user would typically only have to set their policy
once per session, for accessing multiple tables:

\c - bob
SET rls_policy = policy1;
SELECT * FROM t1 JOIN t2; -- OK
SET rls_policy = policy2;
SELECT * FROM t1; -- ERROR: no permission to access t1 using policy2

or you'd be able to set a default policy for users, so that they
wouldn't need to explicitly choose one:

ALTER ROLE bob SET rls_policy = policy1;


Note that the syntax proposed elsewhere --- GRANT SELECT (polname) ON
TABLE tab TO role --- doesn't work because it conflicts with the
syntax for granting column privileges, so there needs to be a distinct
syntax for this, and I think it ought to ultimately allow things like

GRANT SELECT (col1, col2), UPDATE (col1) ON t1 TO bob USING policy1;

Regards,
Dean




Re: [HACKERS] review: Built-in binning functions

2014-06-22 Thread Petr Jelinek

Hi,

On 21/06/14 20:41, Pavel Stehule wrote:

review: https://commitfest.postgresql.org/action/patch_view?id=1484



Thanks for review.



My comments:

* I miss in the documentation a description of the implementation: it is
based on binary search, and when the second parameter is an unsorted array,
it returns nonsense without any warning.



Right, I did mean to mention that the thresholds array must be sorted, but
forgot about it when submitting.



* The description for anyelement is buggy in two ways

varwidth_bucket(5.35::numeric, ARRAY[1, 3, 4, 6]::numeric)

probably should be varwidth_bucket(5.35::numeric, ARRAY[1, 3, 4, 6]::numeric[])

BUT it is converted to double precision; the function with polymorphic
parameters is not used. So it does not respect the width_bucket model:

postgres=# \dfS width_bucket
                                             List of functions
   Schema   │     Name     │ Result data type │                      Argument data types                      │  Type
────────────┼──────────────┼──────────────────┼───────────────────────────────────────────────────────────────┼────────
 pg_catalog │ width_bucket │ integer          │ double precision, double precision, double precision, integer │ normal
 pg_catalog │ width_bucket │ integer          │ numeric, numeric, numeric, integer                            │ normal
(2 rows)

There should be an interface for the numeric type too. I am sure the
important parts of the code can be shared with the polymorphic type.



I wonder if it would be acceptable to just create a pg_proc entry and 
point it to the generic implementation (that's what I originally had; then I 
changed the pg_proc entry to polymorphic types...)


--
 Petr Jelinek  http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [RFC, POC] Don't require a NBuffer sized PrivateRefCount array of local buffer pins

2014-06-22 Thread Simon Riggs
On 9 April 2014 15:09, Tom Lane t...@sss.pgh.pa.us wrote:
 Andres Freund and...@2ndquadrant.com writes:
 On 2014-04-09 18:13:29 +0530, Pavan Deolasee wrote:
 An orthogonal issue I noted is that we never check for overflow in the ref
 count itself. While I understand overflowing int32 counter will take a
 large number of pins on the same buffer, it can still happen in the worst
 case, no ? Or is there a theoretical limit on the number of pins on the
 same buffer by a single backend ?

 I think we'll die much earlier, because the resource owner array keeping
 track of buffer pins will be larger than 1GB.

 The number of pins is bounded, more or less, by the number of scan nodes
 in your query plan.  You'll have run out of memory trying to plan the
 query, assuming you live that long.

ISTM that there is a strong possibility that the last buffer pinned
will be the next buffer to be unpinned. We can use that to optimise
this.

If we store the last 8 buffers pinned in the fast array then we will
be very likely to hit the right buffer just by scanning the array.

So if we treat the fast array as a circular LRU, we get
* pinning a new buffer when array has an empty slot is O(1)
* pinning a new buffer when array is full causes us to move the LRU
into the hash table and then use that element
* unpinning a buffer will most often be O(1), which then leaves an
empty slot for next pin

Doing it that way means all usage is O(1) apart from when we use 8
pins concurrently and that usage does not follow the regular pattern.

 The resource managers are interesting to bring up in this context.
 That mechanism didn't exist when PrivateRefCount was invented.
 Is there a way we could lay off the work onto the resource managers?
 (I don't see one right at the moment, but I'm under-caffeinated still.)

Me neither. Good idea, but I think it would take a lot of refactoring
to do that.


We need to do something about this. We have complaints (via Heikki)
that we are using too much memory in idle backends and small configs,
plus we know we are using too much memory in larger servers. Reducing
the memory usage here will reduce CPU L2 cache churn as well as
increasing available  RAM.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Allowing join removals for more join types

2014-06-22 Thread Simon Riggs
On 17 June 2014 11:04, David Rowley dgrowle...@gmail.com wrote:
 On Wed, Jun 4, 2014 at 12:50 AM, Noah Misch n...@leadboat.com wrote:

 As a point of procedure, I recommend separating the semijoin support into
 its
 own patch.  Your patch is already not small; delaying non-essential parts
 will
 make the essential parts more accessible to reviewers.


 In the attached patch I've removed all the SEMI and ANTI join removal code
 and left only support for LEFT JOIN removal of sub-queries that can be
 proved to be unique on the join condition by looking at the GROUP BY and
 DISTINCT clause.

Good advice, we can come back for the others later.


 Example:

 SELECT t1.* FROM t1 LEFT OUTER JOIN (SELECT value,COUNT(*) FROM t2 GROUP BY
 value) t2 ON t1.id = t2.value;

Looks good on initial look.

This gets optimized...

EXPLAIN (COSTS OFF)
SELECT a.id FROM a
LEFT JOIN (SELECT b.id,1 as dummy FROM b INNER JOIN c ON b.id = c.id
GROUP BY b.id) b ON a.id = b.id AND b.dummy = 1;

Does it work with transitive closure like this?

EXPLAIN (COSTS OFF)
SELECT a.id FROM a
LEFT JOIN (SELECT b.id,1 as dummy FROM b INNER JOIN c ON b.id = c.id
GROUP BY c.id) b ON a.id = b.id AND b.dummy = 1;

i.e. c.id is not in the join condition, but we know from the subselect that
c.id = b.id and b.id is in the join condition

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_resetxlog to clear backup start/end locations.

2014-06-22 Thread Simon Riggs
On 13 June 2014 12:27, Fujii Masao masao.fu...@gmail.com wrote:

 I think that pg_resetxlog should reset backup locations by default
 since they are useless (rather harmful) after pg_resetxlog. Thought?

+1

Do we regard that point as a bug that should be backpatched?

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [REVIEW] Re: Fix xpath() to return namespace definitions

2014-06-22 Thread Ali Akbar
2014-06-16 22:52 GMT+07:00 Abhijit Menon-Sen a...@2ndquadrant.com:

 Thanks for the patch, and welcome to Postgres development.



 I can confirm that it works fine. I have attached here a very slightly
 tweaked version of the patch (removed trailing whitespace, and changed
 some comment text).

 I'm marking this ready for committer.


Thanks for the review. Hope I will be able to contribute a little here and
there in the future.

-- 
Ali Akbar


Re: [HACKERS] How about a proper TEMPORARY TABLESPACE?

2014-06-22 Thread Stephen Frost
* Craig Ringer (cr...@2ndquadrant.com) wrote:
 On 06/18/2014 08:00 PM, Stephen Frost wrote:
  PG would need to enforce that it's only used for temporary objects
  as well, of course..  Or at least, that was my thinking on this.
 
 A way to put UNLOGGED objects in such a space and have them recovered
 if they vanish would also be valuable, IMO.

Interesting idea.

 Not necessarily in the same patch, I'd just rather keep it in mind so
 any chosen design doesn't preclude adding that later.

Perhaps we need a more complex definition than just "temporary
tablespace", as in, we should have a way for users to say "this
tablespace is allowed to have objects of type X, Y, Z"?  I can see a
couple of ways to do that and I don't think it'd require much in the way
of changes to the current patch...

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] review: Built-in binning functions

2014-06-22 Thread Pavel Stehule
2014-06-22 13:02 GMT+02:00 Petr Jelinek p...@2ndquadrant.com:

 Hi,

 On 21/06/14 20:41, Pavel Stehule wrote:

 review: https://commitfest.postgresql.org/action/patch_view?id=1484


 Thanks for review.



 My comments:

 * I miss in the documentation a description of the implementation: it is
 based on binary search, and when the second parameter is an unsorted array,
 it returns nonsense without any warning.


 Right, I did mean to mention that the thresholds array must be sorted, but
 forgot about it when submitting.


  * The description for anyelement is buggy in two ways

 varwidth_bucket(5.35::numeric, ARRAY[1, 3, 4, 6]::numeric)

  probably should be varwidth_bucket(5.35::numeric, ARRAY[1, 3, 4, 6]::numeric[])

  BUT it is converted to double precision; the function with polymorphic
  parameters is not used. So it does not respect the width_bucket model:

 postgres=# \dfS width_bucket
                                              List of functions
    Schema   │     Name     │ Result data type │                      Argument data types                      │  Type
 ────────────┼──────────────┼──────────────────┼───────────────────────────────────────────────────────────────┼────────
  pg_catalog │ width_bucket │ integer          │ double precision, double precision, double precision, integer │ normal
  pg_catalog │ width_bucket │ integer          │ numeric, numeric, numeric, integer                            │ normal
 (2 rows)

 There should be an interface for the numeric type too. I am sure the
 important parts of the code can be shared with the polymorphic type.


 I wonder if it would be acceptable to just create a pg_proc entry and point
 it to the generic implementation (that's what I originally had; then I changed
 the pg_proc entry to polymorphic types...)


Probably not. But a very simple wrapper is acceptable.

Pavel



 --
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services



Re: [Fwd: Re: [HACKERS] proposal: new long psql parameter --on-error-stop]

2014-06-22 Thread Pavel Stehule
Hello

third version with Erik's update

Thanks Erik

Regards

Pavel


2014-06-22 12:01 GMT+02:00 Erik Rijkers e...@xs4all.nl:

 Hi Pavel,

 It seems you overlooked the patch that I sent?

 There are some typos in your patch (also in v2) like:

 PROPMPT1, PROPMT2, PROPMPT3

 SIGLELINE

 I fixed those in my patch and improved the text.
 Attached is the diff against your v1 patch


 Thanks,


 Erik








  Original Message
 -
 Subject: Re: [HACKERS] proposal: new long psql parameter --on-error-stop
 From:Erik Rijkers e...@xs4all.nl
 Date:Sun, June 22, 2014 01:33
 To:  Pavel Stehule pavel.steh...@gmail.com
 Cc:  MauMau maumau...@gmail.com
  Andrew Dunstan and...@dunslane.net
  Tom Lane t...@sss.pgh.pa.us
  Fabrízio Mello fabriziome...@gmail.com
  PostgreSQL Hackers pgsql-hackers@postgresql.org

 ---

 On Sun, June 22, 2014 00:10, Pavel Stehule wrote:

  [help-variables-01.patch ]

 +1.  This patch is a very useful improvement, IMHO.

 I edited the text somewhat; and removed some obvious typos.

 thanks,

 Erik Rijkers

commit 2d44ee1f1bf7f7b57e51522dca88b9f55b308e67
Author: Pavel Stehule pavel.steh...@gmail.com
Date:   Sun Jun 22 00:08:24 2014 +0200

show a list of psql internal, psql formatting and system variables used by psql

diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c
index 3aa3c16..7b026af 100644
--- a/src/bin/psql/help.c
+++ b/src/bin/psql/help.c
@@ -78,12 +78,13 @@ usage(void)
 	printf(_(  -f, --file=FILENAME  execute commands from file, then exit\n));
 	printf(_(  -l, --list   list available databases, then exit\n));
 	printf(_(  -v, --set=, --variable=NAME=VALUE\n
-			set psql variable NAME to VALUE\n));
+			set psql variable NAME to VALUE e.g.: -v ON_ERROR_STOP=1\n));
 	printf(_(  -V, --versionoutput version information, then exit\n));
 	printf(_(  -X, --no-psqlrc  do not read startup file (~/.psqlrc)\n));
 	printf(_(  -1 (\one\), --single-transaction\n
 			execute as a single transaction (if non-interactive)\n));
 	printf(_(  -?, --help   show this help, then exit\n));
+	printf(_(  --help-variables list of available configuration variables (options), then exit\n));
 
 	printf(_(\nInput and output options:\n));
 	printf(_(  -a, --echo-all   echo all input from script\n));
@@ -279,6 +280,81 @@ slashUsage(unsigned short int pager)
 }
 
 
+/*
+ * show list of available variables (options) from command line
+ */
+void
+help_variables(void)
+{
+	printf(_(List of variables (options) for use from command line.\n));
+
+	printf(_(psql variables:\n));
+	printf(_(Usage:\n));
+	printf(_(  psql --set=NAME=VALUE\n  or \\set NAME VALUE in interactive mode\n\n));
+
+	printf(_(  AUTOCOMMIT successful SQL commands are automatically committed\n));
+	printf(_(  COMP_KEYWORD_CASE  determines which letter case to use when completing an SQL key word\n));
+	printf(_(  DBNAME name of currently connected database\n));
+	printf(_(  ECHO   write all input lines to standard output\n));
+	printf(_(  ECHO_HIDDENdisplay internal queries (same as -E option)\n));
+	printf(_(  ENCODING   current client character set encoding\n));
+	printf(_(  FETCH_COUNTthis many rows at a time (use less memory) (default 0 unlimited)\n));
+	printf(_(  HISTCONTROLwhen set, controls history list [ignorespace, ignoredups, ignoreboth]\n));
+	printf(_(  HISTFILE   file name used to store the history list\n));
+	printf(_(  HISTSIZE   the number of commands to store in the command history\n));
+	printf(_(  HOST   the currently connected database server\n));
+	printf(_(  IGNOREEOF  if unset, sending an EOF to interactive session terminates application\n));
+	printf(_(  LASTOIDthe value of last affected OID\n));
+	printf(_(  ON_ERROR_ROLLBACK  when on, ROLLBACK on error\n));
+	printf(_(  ON_ERROR_STOP  stop batch execution after error\n));
+	printf(_(  PORT   the database server port\n));
+	printf(_(  PROMPT1, PROMPT2, PROMPT3  specify the psql prompt\n));
+	printf(_(  QUIET  run quietly (same as -q option)\n));
+	printf(_(  SINGLELINE end of line terminates SQL command mode (same as -S option)\n));
+	printf(_(  SINGLESTEP confirm each query mode (same as -s option)\n));
+	printf(_(  USER   the database user currently connected\n));
+	printf(_(  VERBOSITY  control verbosity of error reports 

Re: [HACKERS] idle_in_transaction_timeout

2014-06-22 Thread Kevin Grittner
Abhijit Menon-Sen a...@2ndquadrant.com

 I've marked it Ready for Committer after a quick read-through.

I took a pass through it with an eye toward committing it.  I found
a couple minor whitespace issues, where the patch didn't follow
conventional indenting practice; I can fix that no problem.  I
found that as it stood, the patch reduced the number of user
timeouts which could be registered; I have a fix for that which I
hope will also prevent future problems in that regard.  None of
that would have held up pushing it.

I found one substantive issue that had been missed in discussion,
though.  The patch modifies the postgres_fdw extension to make it
automatically exempt from an attempt to set a limit like this on
the server to which it connects.  I'm not sure that's a good idea. 
Why should this type of connection be allowed to sit indefinitely
with an idle open transaction?  I'm inclined to omit this part of
the patch:

+++ b/contrib/postgres_fdw/connection.c
@@ -343,6 +343,13 @@ configure_remote_session(PGconn *conn)
    do_sql_command(conn, "SET extra_float_digits = 3");
    else
    do_sql_command(conn, "SET extra_float_digits = 2");
+
+   /*
+    * Ensure the remote server doesn't kill us off if we remain idle in a
+    * transaction for too long.
+    */
+   if (remoteversion >= 90500)
+   do_sql_command(conn, "SET idle_in_transaction_session_timeout = 0");
 }
 
 /*

(Please forgive any mangling of the whitespace above by my email
provider.)

Thoughts?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [RFC, POC] Don't require a NBuffer sized PrivateRefCount array of local buffer pins

2014-06-22 Thread Andres Freund
On 2014-06-22 12:38:04 +0100, Simon Riggs wrote:
 On 9 April 2014 15:09, Tom Lane t...@sss.pgh.pa.us wrote:
  Andres Freund and...@2ndquadrant.com writes:
  On 2014-04-09 18:13:29 +0530, Pavan Deolasee wrote:
  An orthogonal issue I noted is that we never check for overflow in the ref
  count itself. While I understand overflowing int32 counter will take a
  large number of pins on the same buffer, it can still happen in the worst
  case, no ? Or is there a theoretical limit on the number of pins on the
  same buffer by a single backend ?
 
  I think we'll die much earlier, because the resource owner array keeping
  track of buffer pins will be larger than 1GB.
 
  The number of pins is bounded, more or less, by the number of scan nodes
  in your query plan.  You'll have run out of memory trying to plan the
  query, assuming you live that long.
 
 ISTM that there is a strong possibility that the last buffer pinned
 will be the next buffer to be unpinned. We can use that to optimise
 this.

 If we store the last 8 buffers pinned in the fast array then we will
 be very likely to hit the right buffer just by scanning the array.
 
 So if we treat the fast array as a circular LRU, we get
 * pinning a new buffer when array has an empty slot is O(1)
 * pinning a new buffer when array is full causes us to move the LRU
 into the hash table and then use that element
 * unpinning a buffer will most often be O(1), which then leaves an
 empty slot for next pin
 
 Doing it that way means all usage is O(1) apart from when we use 8
 pins concurrently and that usage does not follow the regular pattern.

Even that case is O(1) in the average case since insertion into a
hashtable is O(1) on average...

I've started working on a patch that pretty much works like that. It
doesn't move things around in the array, because that seemed to perform
badly. That seems to make sense, because it'd require moving entries in
the relatively common case of two pages being pinned.
It picks one array entry (chosen by [someint++ % NUM_ENTRIES]), moves
it to the hashtable, and puts the new item in the now-free slot. The same
happens if a lookup hits an entry in the hashtable: one entry is moved
from the array into the hashtable, and the entry from the hashtable is
placed in the freed slot.
That seems to work nicely, but needs some cleanup. And benchmarks.

 We need to do something about this. We have complaints (via Heikki)
 that we are using too much memory in idle backends and small configs,
 plus we know we are using too much memory in larger servers. Reducing
 the memory usage here will reduce CPU L2 cache churn as well as
 increasing available  RAM.

Yea, the buffer pin array currently is one of the biggest sources of
cache misses... In contrast to things like the buffer descriptors it's
not even shared between concurrent processes, so it's more wasteful,
even if small.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] review: Built-in binning functions

2014-06-22 Thread Andres Freund
Hi,

On 2014-06-21 20:41:43 +0200, Pavel Stehule wrote:
 review: https://commitfest.postgresql.org/action/patch_view?id=1484

Can you please not start new threads for reviews of smaller features?
Doing so makes following the discussion much harder. I'm fine with
changing the subject if the reply headers are left intact...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] idle_in_transaction_timeout

2014-06-22 Thread Andres Freund
On 2014-06-21 11:23:44 -0700, Kevin Grittner wrote:
 Andrew Dunstan and...@dunslane.net wrote:
 
 
  On 06/19/2014 06:33 PM, Josh Berkus wrote:
 
  ISTM our realistic options are for seconds or msec as the unit.  If it's
  msec, we'd be limited to INT_MAX msec or around 600 hours at the top end,
  which seems like enough to me but maybe somebody thinks differently?
  Seconds are probably OK but I'm worried about somebody complaining that
  that's not enough resolution, especially as machines get faster.
  I can picture a 500ms timeout more readily than I can picture a 1000hr
  timeout.
 
  As long as we can specify the units, and don't have to say 1000 to mean
  1 second, I agree. I would normally expect this to be set in terms of
  minutes rather than millisecs.
 
 
 OK, so I think we want to see a patch based on v1 (FATAL approach)
 with a change of the name to idle_in_transaction_session_timeout
 and the units changed to milliseconds.

The idea with the GUC name is that if we ever get support for cancelling
transactions we can name that idle_in_transaction_transaction_timeout?
That seems a bit awkward...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] tab completion for setting search_path

2014-06-22 Thread Andres Freund
On 2014-05-05 09:10:17 -0700, Jeff Janes wrote:
 On Sat, May 3, 2014 at 1:11 AM, Andres Freund and...@2ndquadrant.comwrote:
 
  On 2014-05-03 00:13:45 -0700, Jeff Janes wrote:
   On Friday, May 2, 2014, Jeff Janes jeff.ja...@gmail.com wrote:
  Why should we exclude system schemata? That seems more likely to be
  confusing than helpful? I can see a point in excluding another backend's
  temp tables, but otherwise?
 
 
 I've personally never had a need to set the search_path to a system schema,
 and I guess I was implicitly modelling this on what is returned by \dn, not
 by \dnS.   I wouldn't object much to including them; that would be better
 than not having any completion.  I just don't see much point.
 
 And now playing a bit with the system ones, I think it would be more
 confusing to offer them.  pg_catalog and pg_temp_appropriate always get
 searched, whether you put them in the search_path or not.

I thought about committing this but couldn't get over this bit. If you
type SELECT * FROM pg_cat<tab> it'll get autocompleted to
pg_catalog.pg_ and pg_temp<tab> will list all the temp schemas,
including the numeric and toast ones. So we have precedent for *not*
bothering about excluding any schemas. I don't think we should start
doing so in a piecemeal fashion in an individual command's completion.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] idle_in_transaction_timeout

2014-06-22 Thread Kevin Grittner
Andres Freund and...@2ndquadrant.com wrote:

 The idea with the GUC name is that if we ever get support for
 cancelling transactions we can name that
 idle_in_transaction_transaction_timeout?
 That seems a bit awkward...

No, the argument was that for all the other *_timeout settings what
came before _timeout was the thing that was being terminated.  I
think there were some votes in favor of the name on that basis, and
none against.  Feel free to give your reasons for supporting some
other name.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] idle_in_transaction_timeout

2014-06-22 Thread Andres Freund
On 2014-06-22 09:27:24 -0700, Kevin Grittner wrote:
 Andres Freund and...@2ndquadrant.com wrote:
 
  The idea with the GUC name is that if we ever get support for
  cancelling transactions we can name that
  idle_in_transaction_transaction_timeout?
  That seems a bit awkward...
 
 No, the argument was that for all the other *_timeout settings what
 came before _timeout was the thing that was being terminated.  I
 think there were some votes in favor of the name on that basis, and
 none against.  Feel free to give your reasons for supporting some
 other name.

My reasons for not liking the current GUC name are hinted at above. I think
we'll want a version of this that just fails the transaction once we
have the infrastructure. So we should choose a name that allows for
a complementary GUC.
CAKFQuwZCg2uur=tudz_c2auwbo87offghn9mx_hz4qd-b8f...@mail.gmail.com
suggested
On 2014-06-19 10:39:48 -0700, David G Johnston wrote:
 idle_in_transaction_timeout=10s
 idle_in_transaction_target=session|transaction

but I don't like that much. Not sure what'd be good, the best I
currently can come up with is:
idle_in_transaction_termination_timeout =
idle_in_transaction_cancellation_timeout =

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] idle_in_transaction_timeout

2014-06-22 Thread Pavel Stehule
2014-06-22 19:47 GMT+02:00 Andres Freund and...@2ndquadrant.com:

 On 2014-06-22 09:27:24 -0700, Kevin Grittner wrote:
  Andres Freund and...@2ndquadrant.com wrote:
 
   The idea with the GUC name is that if we ever get support for
   cancelling transactions we can name that
   idle_in_transaction_transaction_timeout?
   That seems a bit awkward...
 
  No, the argument was that for all the other *_timeout settings what
  came before _timeout was the thing that was being terminated.  I
  think there were some votes in favor of the name on that basis, and
  none against.  Feel free to give your reasons for supporting some
  other name.

 My reasons for not liking the current GUC name are hinted at above. I think
 we'll want a version of this that just fails the transaction once we
 have the infrastructure. So we should choose a name that allows for
  a complementary GUC.
 CAKFQuwZCg2uur=tudz_c2auwbo87offghn9mx_hz4qd-b8f...@mail.gmail.com
 suggested
 On 2014-06-19 10:39:48 -0700, David G Johnston wrote:
  idle_in_transaction_timeout=10s
  idle_in_transaction_target=session|transaction

 but I don't like that much. Not sure what'd be good, the best I
 currently can come up with is:
 idle_in_transaction_termination_timeout =
 idle_in_transaction_cancellation_timeout =


+1

Pavel



 Greetings,

 Andres Freund

 --
  Andres Freund http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services





Re: [HACKERS] [RFC, POC] Don't require a NBuffer sized PrivateRefCount array of local buffer pins

2014-06-22 Thread Simon Riggs
On 22 June 2014 16:09, Andres Freund and...@2ndquadrant.com wrote:

 So if we treat the fast array as a circular LRU, we get
 * pinning a new buffer when array has an empty slot is O(1)
 * pinning a new buffer when array is full causes us to move the LRU
 into the hash table and then use that element
 * unpinning a buffer will most often be O(1), which then leaves an
 empty slot for next pin

 Doing it that way means all usage is O(1) apart from when we use > 8
 pins concurrently and that usage does not follow the regular pattern.

 Even that case is O(1) in the average case since insertion into a
 hashtable is O(1) on average...

 I've started working on a patch that pretty much works like that. It
 doesn't move things around in the array, because that seemed to perform
 badly. That seems to make sense, because it'd require moving entries in
 the relatively common case of two pages being pinned.
 It takes one array entry (chosen by [someint++ % NUM_ENTRIES]), moves it
 to the hashtable, and puts the new item in the now-free slot. The same
 happens if a lookup hits an entry from the hashtable: it moves one
 entry from the array into the hashtable and puts the entry from the
 hashtable in the free slot.

Yes, that's roughly how the SLRU code works also, so sounds good.

 That seems to work nicely, but needs some cleanup. And benchmarks.

ISTM that microbenchmarks won't reveal the beneficial L2 and RAM
effects of the patch, so I suggest we just need to do a pgbench, a
2-way nested join and a 10-way nested join with an objective of no
significant difference or better. The RAM and L2 effects are enough to
justify this, since it will help with both very small and very large
configs.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [RFC, POC] Don't require a NBuffer sized PrivateRefCount array of local buffer pins

2014-06-22 Thread Andres Freund
On 2014-06-22 19:31:34 +0100, Simon Riggs wrote:
 Yes, that's roughly how the SLRU code works also, so sounds good.

Heh. I rather see that as an argument for it sounding bad :)

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] SQL access to database attributes

2014-06-22 Thread Pavel Stehule
Hello

I returned to review this patch after sleeping on it, and I have to say,
these patches don't break compatibility.

This feature has two patches:
createdb_alterdb_grammar_refactoring.v1-1.patch and
database_attributes.v2-1.patch. The first patch does some cleanup of the
grammar rules for the CREATE DATABASE and ALTER DATABASE statements (and
introduces a CONNECTION_LIMIT property). The second patch introduces the
ALLOW_CONNECTIONS and IS_TEMPLATE database properties. The motivation for
these patches is cleaning up the alterdb/createdb grammars and dropping the
necessity to modify the pg_database table directly.

1. the patches do what was proposed; there were no objections in the
related discussion
2. both patches apply cleanly and compile without new warnings or errors
3. all tests passed
4. the necessary documentation (for the new features) is present
5. the new syntax is actively used in initdb and pg_upgrade. I am not sure
whether some special test is necessary, or whether we are able to test it.

Refactoring the alterdb/createdb grammars makes sense and we want it.

I found only one problem: the first patch introduces a new property
CONNECTION_LIMIT and replaces the previously used CONNECTION LIMIT in the
documentation. CONNECTION LIMIT is still supported, but it is no longer
documented, so to some readers this can look like breaking compatibility,
which it is not. This should be documented better.

Regards

Pavel






2014-06-21 23:14 GMT+02:00 Vik Fearing vik.fear...@dalibo.com:

 On 06/21/2014 10:11 PM, Pavel Stehule wrote:
  Hello
 
  I am looking createdb_alterdb_grammar_refactoring.v1.patch
 
  http://www.postgresql.org/message-id/53868e57.3030...@dalibo.com

 Thank you for looking at this.

  Is there any reason why the incompatible change to CONNECTION_LIMIT
  instead of CONNECTION LIMIT is acceptable? Is decreasing the parser size
  by about 1% good enough to justify breaking compatibility?

 How is compatibility broken?  The grammar still accepts the old way, I
 just changed the documentation to promote the new way.

  Surely this patch cannot be backported as proposed there.

 There are reasons I can think of not to backport this first patch, but
 breaking compatibility isn't one of them.
 --
 Vik



Re: [HACKERS] Allowing join removals for more join types

2014-06-22 Thread Simon Riggs
On 22 June 2014 12:51, Simon Riggs si...@2ndquadrant.com wrote:

 Looks good on initial look.

Tests 2 and 3 seem to test the same thing.

There are no tests which have multiple column clauselist/sortlists,
nor tests for cases where the clauselist is a superset of the
sortlist.

Test comments should refer to join removal rather than
optimization because we may forget which optimization they are there
to test.

It's not clear to me where you get the term sortclause from. This is
either the groupclause or distinctclause, but in the test cases you
provide this shows this has nothing at all to do with sorting since
there is neither an order by or a sorted aggregate anywhere near those
queries. Can we think of a better name that won't confuse us in the
future?

The comment Since a constant only has 1 value the existence of one here will
+ * not cause any duplication of the results. We'll simply ignore it!
would be better as We can ignore constants since they have only one
value and don't affect uniqueness of results.

The comment XXX is this comment still true?? can be removed since
its just a discussion point.

The comment beginning Currently, we only know how to remove left...
has rewritten a whole block of text just to add a few words in the
middle. We should rewrite the comment so it changes as few lines as
possible. Especially when that comment is going to be changed again
with your later patches. Better to have it provide a bullet point list
of things we know how to remove, so we can just add to it later.

Still looks good, other than the above.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [GSoC] Clustering in MADlib - status update

2014-06-22 Thread Maxence Ahlouche
Hi!

Here's my report for week 5.

Week 5 - 2014/06/22

This week has been full of debugging of the main SQL function. The previous
week, I had been able to come up with a working function to compute a
medoid for a given group of points, but since then I've struggled to
integrate it with the rest of the SQL. Some errors were trivial (for
example some parameters that I had written with underscores instead of
using camelCase - Hai spotted this one, I think I'd never have found it by
myself), others less so. But it's coming!

According to the timeline I had planned at the beginning of the project,
I'm definitely late. The module I'm still writing should have been finished
last week, and it's not even working yet. It seems I was far too optimistic
in this timeline. For the second step, as I'll have less time than
expected, I'm thinking of switching from OPTICS to DBSCAN, which at least I
have fully understood (OPTICS is quite complicated). Is everyone OK with
this?

Next week is the evaluation week. Hopefully I'll be allowed to continue
working on this project, even though I haven't provided many results so
far :p As for me, I have nothing to complain about: I've always been given
patience and clear answers to my questions. Only the phone calls didn't
turn out as well as they sounded, but this problem will be fixed at our
next meeting, as we'll now use IRC!


Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-22 Thread Stephen Frost
Dean,

* Dean Rasheed (dean.a.rash...@gmail.com) wrote:
 On 17 June 2014 20:19, Robert Haas robertmh...@gmail.com wrote:
  On Fri, Jun 13, 2014 at 3:11 AM, Dean Rasheed dean.a.rash...@gmail.com 
  wrote:
  Yeah, I was thinking something like this could work, but I would go
  further. Suppose you had separate GRANTable privileges for direct
  access to individual tables, bypassing RLS, e.g.
 
GRANT DIRECT SELECT|INSERT|UPDATE|DELETE ON table_name TO role_name
 
  So, is this one new privilege (DIRECT) or four separate new privileges
  that are variants of the existing privileges (DIRECT SELECT, DIRECT
  INSERT, DIRECT UPDATE, DIRECT DELETE)?
 
 I was thinking it would be 4 new privileges, so that a user could for
 example be granted DIRECT SELECT permission on a table, but not DIRECT
 UPDATE.

Ok.

 On reflection though, I think I prefer the approach of allowing
 multiple named security policies per table, because it gives the
 planner more opportunity to optimize queries against specific RLS
 quals, which won't work if the ACL logic is embedded in functions.

Having more than one policy for the purpose of performance really
doesn't make a huge amount of sense to me.  Perhaps someone could
explain the use-case with specific example applications where they would
benefit from this?  Based on the discussion, they would have to be OR'd
together in the query as built, with any match counting as success.
One could build an SQL function, which could potentially be inlined,
that does the same if their case is that simple.

Being able to define the policy based on some criteria may allow it to
be simpler (eg: policy 'a' applies for certain roles, while policy 'b'
applies for other roles), but I'm not enthusiastic about that approach
because there could be a huge number of permutations to allow.

How about another approach- what about having a function which is called
(as the table owner, I'm thinking..) that then returns the qual to be
included, instead of having to define a specific qual which is included
in the catalog?  That function could take into consideration the user,
table, etc, and return a qual which includes constants to compare rows
against for planning purposes.  This would have to be done early enough,
of course, which might be difficult.  For my part, having that
capability would be neat, but nothing we're trying to do here would
preclude us from adding it later either.

 That seems like something that would have to be designed in now,
 because it's difficult to see how you could add it later.

I don't follow this at all.  Going from supporting one qual to
supporting multiple seems like it'd be quite straight-forward to add
in later?  Going the other way would be difficult.

 Managing policy names becomes an issue though, because if you have 2
 tables each with 1 policy, but you give them different names, how can
 the user querying the data specify that they want policy1 for table1
 and policy2 for table2, possibly in the same query? 

From my experience, users don't pick the policy any more than they get
to pick which set of permissions get applied to them when querying
tables (modulo roles, of course, but that's a mechanism for changing
users, not for saying which set of permissions you get).  All that you
describe could be done for regular permissions also, but we don't, and
I don't think we get complaints about that because we have roles-
which would work just the same for RLS (assuming the RLS policy defined
has a role component).

Having a function be able to be called to return a qual to be used would
be a way to have per-role RLS also, along with providing the flexibility
to have per-source-IP, per-connection-type, etc, RLS policies also.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] tab completion for setting search_path

2014-06-22 Thread Ian Barwick



On 23/06/14 00:58, Andres Freund wrote:

On 2014-05-05 09:10:17 -0700, Jeff Janes wrote:

On Sat, May 3, 2014 at 1:11 AM, Andres Freund and...@2ndquadrant.com wrote:


On 2014-05-03 00:13:45 -0700, Jeff Janes wrote:

On Friday, May 2, 2014, Jeff Janes jeff.ja...@gmail.com wrote:

Why should we exclude system schemata? That seems more likely to be
confusing than helpful? I can see a point in excluding another backend's
temp tables, but otherwise?



I've personally never had a need to set the search_path to a system schema,
and I guess I was implicitly modelling this on what is returned by \dn, not
by \dnS.   I wouldn't object much to including them; that would be better
than not having any completion.  I just don't see much point.

And now playing a bit with the system ones, I think it would be more
confusing to offer them.  pg_catalog and pg_temp_<appropriate> always get
searched, whether you put them in the search_path or not.


I thought about committing this but couldn't get over this bit. If you
type SELECT * FROM pg_cat<tab> it'll get autocompleted to
pg_catalog.pg_ and pg_temp<tab> will list all the temp schemas
including the numeric and toast ones. So we have precedent for *not*
bothering about excluding any schemas. I don't think we should start
doing so in a piecemeal fashion in an individual command's completion.


There is an exception of sorts already for system schemas, in that although
SELECT * FROM p<tab> will list the system schemas, it will not list any
tables from them, and won't until SELECT * FROM pg_<tab> is entered
(see the note in tab-complete.c around line 3722).

Personally I'd be mildly annoyed if every SET search_path TO p<tab>
resulted in all the system schemas being displayed when all I want is
public; how about having these listed only once pg_ is entered, i.e.
SET search_path TO pg_<tab>?

Regards

Ian Barwick







--
 Ian Barwick   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] idle_in_transaction_timeout

2014-06-22 Thread Kevin Grittner
Andres Freund and...@2ndquadrant.com wrote:

 I think we'll want a version of this that just fails the
 transaction once we have the infrastructure. So we should choose
 a name that allows for a complementary GUC.

If we stick with the rule that what is to the left of _timeout is
what is being cancelled, then a GUC to cancel a transaction which
remains idle for too long could be called idle_transaction_timeout.

Do you disagree with the general idea of following that pattern?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] idle_in_transaction_timeout

2014-06-22 Thread David G Johnston
On Sunday, June 22, 2014, Kevin Grittner wrote:

 Andres Freund wrote:

  I think we'll want a version of this that just fails the
  transaction once we have the infrastructure. So we should choose
  a name that allows for a complementary GUC.

 If we stick with the rule that what is to the left of _timeout is
 what is being cancelled, then a GUC to cancel a transaction which
 remains idle for too long could be called idle_transaction_timeout.

 Do you disagree with the general idea of following that pattern?


If we ever do give the user an option, the non-specific name with a
separate type GUC could be used and this session-specific variable
deprecated.  And disallow both being active at the same time.  Or something
else.  I agree that idle_in_transaction_transaction_timeout would be proper
but troublesome for the alternative, but crossing that bridge if we ever
get there seems reasonable in light of picking the best single name for
this specific feature.

Idle_transaction_timeout has already been discarded since truly idle
transactions are not being affected, only those that are in transaction.
 The first quote above is limited to that subset as well.

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/idle-in-transaction-timeout-tp5805859p5808311.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.

Re: [HACKERS] tab completion for setting search_path

2014-06-22 Thread Tom Lane
Ian Barwick i...@2ndquadrant.com writes:
 On 23/06/14 00:58, Andres Freund wrote:
 I thought about committing this but couldn't get over this bit. If you
 type SELECT * FROM pg_cat<tab> it'll get autocompleted to
 pg_catalog.pg_ and pg_temp<tab> will list all the temp schemas
 including the numeric and toast ones. So we have precedent for *not*
 bothering about excluding any schemas. I don't think we should start
 doing so in a piecemeal fashion in an individual command's completion.

 There is an exception of sorts already for system schemas, in that although
 SELECT * FROM p<tab> will list the system schemas, it will not list any
 tables from them, and won't until SELECT * FROM pg_<tab> is entered
 (see the note in tab-complete.c around line 3722).

 Personally I'd be mildly annoyed if every SET search_path TO p<tab>
 resulted in all the system schemas being displayed when all I want is
 public; how about having these listed only once pg_ is entered, i.e.
 SET search_path TO pg_<tab>?

I think there is a pretty strong practical argument for excluding the
pg_temp and pg_toast schemas from completion for search_path, namely
that when does anyone ever need to include those in their search_path
explicitly?

The use-case for including pg_catalog in your path is perhaps a bit
greater, but not by much.

I'm not sure that what we do when auto-completing after FROM is an
entirely valid analogy, because the use-cases are different as a result
of the rules about schemas getting automatically included in your path.

regards, tom lane




Re: [HACKERS] idle_in_transaction_timeout

2014-06-22 Thread Abhijit Menon-Sen
At 2014-06-22 19:45:08 -0700, david.g.johns...@gmail.com wrote:

 On Sunday, June 22, 2014, Kevin Grittner-5 [via PostgreSQL] 
 ml-node+s1045698n580830...@n5.nabble.com wrote:
 
  If we stick with the rule that what is to the left of _timeout is
  what is being cancelled, the a GUC to cancel a transaction which
  remains idle for too long could be called idle_transaction_timeout.

I (somewhat reluctantly) agree with Kevin that
idle_in_transaction_session_timeout (for FATAL) and
idle_transaction_timeout (for ERROR) would work.

The only other alternative I see is to use idle_transaction_timeout
now (even when we're killing the session) and later introduce another
setting named idle_transaction_timeout_keep_session (default false)
or something like that. (I'd prefer an extra boolean to something set
to 'session' or 'transaction'.)

 Idle_transaction_timeout has already been discarded since truly idle
 transactions are not being affected, only those that are in
 transaction.

I have no idea what this means.

-- Abhijit




Re: [HACKERS] pg_resetxlog to clear backup start/end locations.

2014-06-22 Thread Fujii Masao
On Sun, Jun 22, 2014 at 8:54 PM, Simon Riggs si...@2ndquadrant.com wrote:
 On 13 June 2014 12:27, Fujii Masao masao.fu...@gmail.com wrote:

 I think that pg_resetxlog should reset backup locations by default
 since they are useless (rather harmful) after pg_resetxlog. Thought?

 +1

 Do we regard that point as a bug that should be backpatched?

Yep, I think so.

Regards,

-- 
Fujii Masao




Re: [HACKERS] 9.5 CF1

2014-06-22 Thread Abhijit Menon-Sen
Hi.

One week into the CommitFest, we now have nine committed patches, ten
ready for committer, fourteen waiting on their author, and fifty-nine
still awaiting review.

Thanks to all the people who submitted a review (and a special mention
for MauMau for reviewing the most patches so far, of which one has been
committed and three others are ready for committer).

There are still twenty-one patches that do not have a reviewer. Here's
a selection:

KNN-GiST with recheck

http://archives.postgresql.org/message-id/CAPpHfdu_qBLNRnv-r=_tofZYYa6-r=z5_mgf4_phaokwcyx...@mail.gmail.com

XLogLockBlockRangeForCleanup
http://archives.postgresql.org/message-id/20140613084012.ga16...@toroid.org

UPDATE/DELETE .. ORDER BY .. LIMIT ..

http://archives.postgresql.org/message-id/1394662740.69204.yahoomail...@web163002.mail.bf1.yahoo.com

event triggers: more DROP info

http://archives.postgresql.org/message-id/20140613195049.gq18...@eldon.alvh.no-ip.org

WAL format & API changes
http://archives.postgresql.org/message-id/539cba75.3050...@vmware.com

Please feel free to contact me with questions.

-- Abhijit




[HACKERS] Re: Request for Patch Feedback: Lag Lead Window Functions Can Ignore Nulls

2014-06-22 Thread Abhijit Menon-Sen
Hi.

What's the status of this patch? Jeff, Álvaro, you're listed as
reviewers. Have you had a chance to look at the updated version
that Nick posted?

-- Abhijit




Re: [HACKERS] review: tab completion for set search_path TO

2014-06-22 Thread Michael Paquier
On Sun, Jun 22, 2014 at 2:22 AM, Pavel Stehule pavel.steh...@gmail.com
wrote:

 Hello,

 this patch https://commitfest.postgresql.org/action/patch_view?id=1443 is
 trivial with zero risk.

 The patch applies without any issues, and compilation was clean as well.

Just wondering: why create a new thread for a review instead of replying
directly to the existing one? This makes the review/patch submission
flow rather complicated to follow.
-- 
Michael


[HACKERS] Add a filed to PageHeaderData

2014-06-22 Thread Soroosh Sardari
Dear Hackers

I wanted to add a char array of length 20 to PageHeaderData in
include/storage/bufpage.h.
Surprisingly, the regression tests failed on the rangetypes test!

The diff between the resulting and expected output files is:

*** 968,974 
  select count(*) from test_range_spgist where ir -|- int4range(100,500);
   count
  ---
!  5
  (1 row)

  -- now check same queries using a bulk-loaded index
--- 968,974 
  select count(*) from test_range_spgist where ir -|- int4range(100,500);
   count
  ---
!  2
  (1 row)

  -- now check same queries using a bulk-loaded index

==

Any help appreciated.


Soroosh Sardari