Re: [HACKERS] "RETURNING PRIMARY KEY" syntax extension

2014-06-24 Thread Rushabh Lathia
Hello All,

I have assigned myself as reviewer of this patch. I have gone through
the mail thread, in which questions were raised about the feature and
its implementation, so I would like to know the current status of this
project/patch.

Regards,


On Thu, Jun 12, 2014 at 5:53 PM, Ian Barwick  wrote:

> On 14/06/12 20:58, Jochem van Dieten wrote:
> > On Thu, Jun 12, 2014 at 12:25 PM, Ian Barwick wrote:
> >
> > On 14/06/12 18:46, Jochem van Dieten wrote:
> > > I haven't checked the code, but I am hoping it will help with the
> problem
> > > where a RETURNING * is added to a statement that is not an insert
> or update
> > > by the JDBC driver. That has been reported on the JDBC list at
> least twice,
> > > and the proposed workaround is neither very elegant nor very
> robust:
> > >
> https://groups.google.com/forum/#!msg/pgsql.interfaces.jdbc/7WY60JX3qyo/-v1fqDqLQKwJ
> >
> > Unfortunately that seems to be a JDBC-specific issue, which is
> outside
> > of the scope of this particular patch (which proposes additional
> server-side
> > syntax intended to make RETURNING * operations more efficient for
> > certain use cases, but which is in itself not a JDBC change).
> >
> >
> > But the obvious way to fix the JDBC issue is not to fix it by adding a
> 'mini parser' on
> > the JDBC side, but to make SELECT ... RETURNING PRIMARY KEY a regular
> select that silently
> > ignores the returning clause and doesn't throw an error on the
> server-side.
> >
> > That might still be outside the scope of this particular patch, but it
> would provide
> > (additional) justification if it were supported.
>
> That would be adding superfluous, unused and unusable syntax of no
> potential value
> (there is no SELECT ... RETURNING and it wouldn't make any sense if there
> was) as a
> workaround for a driver issue - not going to happen.
>
> Regards
>
> Ian Barwick
>
>
> --
>  Ian Barwick   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>



-- 
Rushabh Lathia
www.EnterpriseDB.com


Re: [HACKERS] PATCH: Allow empty targets in unaccent dictionary

2014-06-24 Thread Abhijit Menon-Sen
Hi.

At 2014-04-20 01:06:43 +0200, alhash...@alhashash.net wrote:
>
> To use unaccent dictionary for these languages, we need to allow empty
> targets to remove diacritics instead of replacing them.

Your patch should definitely add a test case or two to sql/unaccent.sql
and expected/unaccent.out showing the behaviour that didn't work before
the change.

> The attached patch modifies unaccent.c so that the dictionary parser
> uses a zero-length target when the line has no target.

The basic idea seems sensible.

> diff --git a/contrib/unaccent/unaccent.c b/contrib/unaccent/unaccent.c
> old mode 100644
> new mode 100755
> index a337df6..4e72829
> --- a/contrib/unaccent/unaccent.c
> +++ b/contrib/unaccent/unaccent.c
> @@ -58,7 +58,9 @@ placeChar(TrieChar *node, unsigned char *str, int lenstr, char *replaceTo, int r
>   {
>   curnode->replacelen = replacelen;
>   curnode->replaceTo = palloc(replacelen);
> - memcpy(curnode->replaceTo, replaceTo, replacelen);
> + /* palloc(0) returns a valid address, not NULL */
> + if (replaceTo) /* memcpy() is undefined for NULL pointers */
> + memcpy(curnode->replaceTo, replaceTo, replacelen);
>   }
>   }

I think these comments confuse the issue, and should be removed. In
fact, I think this part of the code can remain unchanged (see below).

> @@ -105,10 +107,10 @@ initTrie(char *filename)
>   while ((line = tsearch_readline(&trst)) != NULL)
>   {
>   /*
> -  * The format of each line must be "src trg" where src and trg
> +  * The format of each line must be "src [trg]" where src and trg
>* are sequences of one or more non-whitespace characters,
>* separated by whitespace.  Whitespace at start or end of
> -  * line is ignored.
> +  * line is ignored. If no trg added, a zero-length string is used.
>*/
>   int state;

I suggest "If trg is empty, a zero-length string is used" for the last
sentence.

> @@ -160,6 +162,13 @@ initTrie(char *filename)
>   }
>   }
>  
> + /* if no trg (loop stops at state 1 or 2), use zero-length target */
> + if (state == 1 || state == 2)
> + {
> + trglen = 0;
> + state = 5;
> + }

If I understand the code correctly, "src" alone will leave state == 1,
and "src " will leave state == 2, and in both cases trg and trglen will
be unchanged (from NULL and 0 respectively).

In that case, I think it would be clearer to do something like this:

char *trg = "";

…

/* It's OK to have a valid src and empty trg. */
if (state > 0 && trglen == 0)
    state = 5;

That way, you don't have the NULL pointer, and you don't have to add a
NULL-pointer test in placeChar() above.
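
The effect of the change can be sketched outside the server as a tiny
standalone parser. This is an illustrative simplification, not the actual
unaccent.c code (the buffer sizes and sscanf-based tokenizing are mine):
a rule line is "src [trg]", and a missing trg means a zero-length
replacement, i.e. the character is simply removed.

```c
#include <stdio.h>

/* Minimal sketch of the rule format under discussion: each line is
 * "src [trg]"; when trg is absent, the replacement defaults to a
 * zero-length string rather than a NULL pointer. */
static void
parse_rule(const char *line, char *src, char *trg)
{
    src[0] = '\0';
    trg[0] = '\0';              /* default: empty target */
    /* sscanf leaves trg untouched if only one token is present */
    (void) sscanf(line, "%63s %63s", src, trg);
}
```

With this shape, a caller never sees a NULL target, which is exactly why
the extra NULL check in placeChar() becomes unnecessary.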

What do you think? If you submit a revised patch along these lines, I'll
mark it ready for committer.

Thank you.

-- Abhijit


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_resetxlog to clear backup start/end locations.

2014-06-24 Thread Kyotaro HORIGUCHI
Hello, thank you for the suggestion.

I haven't touched what the '-n' option shows, and have rewritten the
documentation for the option a bit. '-n' still won't show the changes
to the backup locations.

===
> > There are some changes which are not shown by the '-n' option at
> > all. I think these should be shown by '-n'. I suppose this is a
> > kind of bug, but fixing it seems like a kind of 'feature change'...
> >
> > Any suggestions?
> 
> This seems to be a problem with the documentation and the help message of
> the -n option. According to the source code, -n displays only the values
> that the -e, -l, -m, -o, -O, and -x options change. The values that -f
> forcibly changes are not shown by -n. I'm not sure if this is an oversight in 108e399...

The HTML (SGML) documentation says:

=== share/doc/html/app-pgresetxlog.html
| The -n (no operation) option instructs pg_resetxlog to print
| the values reconstructed from pg_control and values about to be
| changed, and then exit without modifying anything. This is
| mainly a debugging tool, but can be useful as a sanity check
| before allowing pg_resetxlog to proceed for real.

This seems to have the same meaning as the help message. "For
debugging use" also supports your reading of the option, I suppose.

> Anyway, I think that making the -n option display all the values that
> the -f option changes would be useful. But since that's not a bugfix,
> we should apply it only in HEAD.

Agreed.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Fujii Masao
On Sun, Jun 22, 2014 at 6:54 AM, Vik Fearing  wrote:
> On 06/21/2014 08:23 PM, Kevin Grittner wrote:
>> OK, so I think we want to see a patch based on v1 (FATAL approach)
>> with a change of the name to idle_in_transaction_session_timeout
>> and the units changed to milliseconds.  I don't see why the
>> remoteversion test shouldn't be changed to use 90500 now, too.
>
> The attached patch, rebased to current master, addresses all of these
> issues.

Sorry if this has already been discussed before.

Why is the IIT timeout turned on only when send_ready_for_query is true?
I was thinking it should be turned on every time a message is received.
Imagine the case where the session is in the idle-in-transaction state and
a client gets stuck after sending a Parse message and before sending the
Bind message. This case would also cause the long-transaction problem and
should be resolved by the IIT timeout. But the IIT timeout that this patch
adds cannot handle this case because it's enabled only when
send_ready_for_query is true. Thoughts?
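
To make the two placements concrete, here is a hedged sketch (the
function names are mine, not the patch's) of when the timer would be
armed under each approach:

```c
#include <stdbool.h>

/* As in the patch: the timer is armed only at the point where
 * ReadyForQuery is about to be sent, i.e. after a completed query. */
static bool
iit_timer_armed_patch(bool in_xact, bool send_ready_for_query)
{
    return in_xact && send_ready_for_query;
}

/* As proposed above: the timer is armed whenever we wait for any
 * client message while inside a transaction, so a stall between
 * Parse and Bind is also covered. */
static bool
iit_timer_armed_proposed(bool in_xact)
{
    return in_xact;
}
```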

Regards,

-- 
Fujii Masao




Re: [HACKERS] Cluster name in ps output

2014-06-24 Thread Abhijit Menon-Sen
Hi.

I reviewed the version of this patch without log_line_prefix support,
since that seemed to be generally acceptable in followup discussion.

The patch didn't apply any more because of some changes to guc.c, but it
was trivial to regenerate (fixed patch attached).

> diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
> index 70e5a51..84ae5f3 100644
> --- a/src/backend/utils/misc/postgresql.conf.sample
> +++ b/src/backend/utils/misc/postgresql.conf.sample
> @@ -74,6 +74,8 @@
>   # (change requires restart)
>  #bonjour_name = ''   # defaults to the computer name
>   # (change requires restart)
> +#cluster_name = ''   # defaults to the computer name
> + # (change requires restart)

Cut-and-paste error (there's no default). Also fixed in the attached
patch.

The patch looks OK, and works as advertised (I tested on Linux). If we
want the feature (I like it), this patch is a good enough way to get it.

I'm marking it ready for committer.
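
For reference, the title-building behaviour of the attached patch can be
sketched as a simplified standalone function (argument names are
illustrative; the real code writes into ps_buffer and uses a
platform-dependent program-name prefix): an empty cluster_name is
omitted entirely, otherwise it is inserted right after the prefix.

```c
#include <stdio.h>

/* Sketch of the ps-title logic in the patch: the cluster name, when
 * set, appears between the "postgres: " prefix and the usual
 * "user db host" fields. */
static void
build_ps_title(char *buf, size_t size, const char *cluster_name,
               const char *user, const char *db, const char *host)
{
    if (cluster_name[0] == '\0')
        snprintf(buf, size, "postgres: %s %s %s ", user, db, host);
    else
        snprintf(buf, size, "postgres: %s %s %s %s ",
                 cluster_name, user, db, host);
}
```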

-- Abhijit
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 09e6dfc..2426dfe 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -695,6 +695,23 @@ include 'filename'
   
  
 
+ 
+  cluster_name (string)
+  
+   cluster_name configuration parameter
+  
+  
+   
+Sets the cluster name that appears in the process title for all
+processes in this cluster.  No name is shown if this parameter is set
+to the empty string '' (which is the default).  The
+process title is typically viewed by the ps command, or in
+Windows by using the Process Explorer.
+This parameter can only be set at server start.
+   
+  
+ 
+
  
   tcp_keepalives_idle (integer)
   
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 6902c23..c9dbb4c 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -443,6 +443,7 @@ int			temp_file_limit = -1;
 
 int			num_temp_buffers = 1024;
 
+const char *cluster_name = "";
 char	   *data_directory;
 char	   *ConfigFileName;
 char	   *HbaFileName;
@@ -3090,6 +3091,17 @@ static struct config_string ConfigureNamesString[] =
 	},
 
 	{
+		{"cluster_name", PGC_POSTMASTER, CONN_AUTH_SETTINGS,
+			gettext_noop("Sets the name of the cluster that appears in 'ps' output."),
+			NULL,
+			GUC_IS_NAME
+		},
+		&cluster_name,
+		"",
+		NULL, NULL, NULL
+	},
+
+	{
 		/*
 		 * Can't be set by ALTER SYSTEM as it can lead to recursive definition
 		 * of data_directory.
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index d109394..e4e0411 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -74,6 +74,8 @@
 	# (change requires restart)
 #bonjour_name = ''			# defaults to the computer name
 	# (change requires restart)
+#cluster_name = ''			# visible in ps output if set
+	# (change requires restart)
 
 # - Security and Authentication -
 
diff --git a/src/backend/utils/misc/ps_status.c b/src/backend/utils/misc/ps_status.c
index 3aeceae..471d890 100644
--- a/src/backend/utils/misc/ps_status.c
+++ b/src/backend/utils/misc/ps_status.c
@@ -29,6 +29,7 @@
 #include "libpq/libpq.h"
 #include "miscadmin.h"
 #include "utils/ps_status.h"
+#include "utils/guc.h"
 
 extern char **environ;
 bool		update_process_title = true;
@@ -264,15 +265,24 @@ init_ps_display(const char *username, const char *dbname,
 	 * apparently setproctitle() already adds a `progname:' prefix to the ps
 	 * line
 	 */
-	snprintf(ps_buffer, ps_buffer_size,
-			 "%s %s %s ",
-			 username, dbname, host_info);
+#define PROGRAM_NAME_PREFIX ""
 #else
-	snprintf(ps_buffer, ps_buffer_size,
-			 "postgres: %s %s %s ",
-			 username, dbname, host_info);
+#define PROGRAM_NAME_PREFIX "postgres: "
 #endif
 
+	if (*cluster_name == '\0')
+	{
+		snprintf(ps_buffer, ps_buffer_size,
+ PROGRAM_NAME_PREFIX "%s %s %s ",
+ username, dbname, host_info);
+	}
+	else
+	{
+		snprintf(ps_buffer, ps_buffer_size,
+ PROGRAM_NAME_PREFIX "%s %s %s %s ",
+ cluster_name, username, dbname, host_info);
+	}
+
 	ps_buffer_cur_len = ps_buffer_fixed_size = strlen(ps_buffer);
 
 	set_ps_display(initial_str, true);
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index 1493d2c..d3cdf68 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -224,6 +224,7 @@ extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
 
+extern const char *cluster_name;
 extern char *data_directory;
 extern char *ConfigFileName;
 extern char *HbaFileName;



Re: [HACKERS] makeAndExpr(), etc. confined to gram.y?

2014-06-24 Thread Tom Lane
Amit Langote  writes:
> Is there a reason why they've been left out of
> makefuncs.h/makefuncs.c? Perhaps they are not supposed to be used
> outside gram.y at all? For example, previously a caller (potentially)
> outside parser could do a makeA_Expr(AEXPR_AND, ...). I guess this is
> no longer possible with AEXPR_AND gone?

What would be the purpose?  There is noplace except gram.y that builds
raw parse trees.

regards, tom lane




Re: [HACKERS] [BUGS] BUG #10728: json_to_recordset with nested json objects NULLs columns

2014-06-24 Thread Tom Lane
Andrew Dunstan  writes:
>>> I currently don't have lots of time to devote to this, sadly, but
>>> Michael's patch looks like a good minimal fix.

> This problem is also manifest in json_populate_recordset, which also 
> uses the function in question, and is in 9.3:

I've pushed this patch back through 9.3, along with a fix to ensure that
json_populate_record destroys the hashtable it creates.  I want to do some
more work on this code, but this much is indubitably a bug fix.

regards, tom lane




[HACKERS] makeAndExpr(), etc. confined to gram.y?

2014-06-24 Thread Amit Langote
Hi,

A recent commit titled "Avoid recursion when processing simple lists
of AND'ed or OR'ed clauses."
(2146f13408cdb85c738364fe8f7965209e08c6be) got rid of AEXPR_AND, etc.
and instead created makeAndExpr(), etc. in gram.y

Is there a reason why they've been left out of
makefuncs.h/makefuncs.c? Perhaps they are not supposed to be used
outside gram.y at all? For example, previously a caller (potentially)
outside parser could do a makeA_Expr(AEXPR_AND, ...). I guess this is
no longer possible with AEXPR_AND gone?

--
Amit




Re: [HACKERS] pgaudit - an auditing extension for PostgreSQL

2014-06-24 Thread Stephen Frost
Abhijit,

* Abhijit Menon-Sen (a...@2ndquadrant.com) wrote:
> At 2014-06-24 14:02:11 -0400, sfr...@snowman.net wrote:
> >
> > Will you (collectively) be working in this direction for 9.5?
> 
> We have some time available to work on it, but not so much that I want
> to write any more code without a clearer idea of what might be accepted
> eventually for inclusion.

You and me both... (see the nearby discussion regarding the redesign of
RLS).  For my part, the next steps might be to consider how you'd
migrate what you've provided for configuration into catalog tables and
how we'd address the concerns raised elsewhere regarding catalog access
in cases where we're not in a transaction (or at least addressing those
areas and working out what the logging would do in those situations).

We'd also end up re-working the code to be called as part of PG core
rather than through hook functions, of course, but I don't think those
changes would be too bad compared to figuring out the other issues.

Additionally, thought towards what the SQL-level syntax would be is
another key point: would the main command be 'ALTER AUDIT'?  What would
the sub-commands of that look like for the DBA/auditor who is tasked
with defining/implementing the auditing for the system?  How would we
include data in a structured, yet flexible way?  (That is to say, the
set of tables and columns logged could be varied, yet we'd want to see
the actual data logged, perhaps as JSON?)

Looking forward to your thoughts.

Thanks!

Stephen




Re: [HACKERS] pgaudit - an auditing extension for PostgreSQL

2014-06-24 Thread Abhijit Menon-Sen
At 2014-06-24 14:02:11 -0400, sfr...@snowman.net wrote:
>
> Will you (collectively) be working in this direction for 9.5?

We have some time available to work on it, but not so much that I want
to write any more code without a clearer idea of what might be accepted
eventually for inclusion.

-- Abhijit




Re: [HACKERS] pg_resetxlog to clear backup start/end locations.

2014-06-24 Thread Fujii Masao
On Wed, Jun 25, 2014 at 11:13 AM, Kyotaro HORIGUCHI
 wrote:
> Hello,
>
>> OK, I'll modify it to reset the backup locations by default and
>> remove the new '-b' option. Since this looks like a bug to people,
>> I'll provide backpatches back to... 8.4?  (The final release of 8.4
>> is scheduled for July 2014.)
>
> I looked more closely at pg_resetxlog and found some of its
> behavior uncomfortable.
>
> "pg_resetxlog -n", which is explained in its help that "no
> update, just show what would be done (for testing) ", shows
> current values then values to be changed, like this.
>
> | Current pg_control values:
> |
> | pg_control version number:942
> | Catalog version number:   201406181
> | Database system identifier:   6026883474640211951
> | Latest checkpoint's TimeLineID:   1
> | Latest checkpoint's full_page_writes: on
> | Latest checkpoint's NextXID:  0/1831
> | Latest checkpoint's NextOID:  32772
> | Latest checkpoint's NextMultiXactId:  1
> | Latest checkpoint's NextMultiOffset:  0
> | Latest checkpoint's oldestXID:1800
> | Latest checkpoint's oldestXID's DB:   1
> | Latest checkpoint's oldestActiveXID:  0
> | Latest checkpoint's oldestMultiXid:   1
> | Latest checkpoint's oldestMulti's DB: 1
> | Maximum data alignment:   8
> ...
> | Values to be changed:
> |
> | First log segment after reset:0001000C
>
> But "pg_resetxlog -f" made the change as below,
>
> | *** /tmp/1.txt  2014-06-25 10:49:42.269336739 +0900
> | --- /tmp/2.txt  2014-06-25 10:49:49.780266229 +0900
> | ***
> | *** 5,9 
> | ! pg_control last modified: Wed 25 Jun 2014 10:45:20 AM JST
> | ! Latest checkpoint location:   0/B8DE1A0
> | ! Prior checkpoint location:0/B8DE138
> | ! Latest checkpoint's REDO location:0/B8DE1A0
> | ! Latest checkpoint's REDO WAL file:0001000B
> | --- 5,9 
> | ! pg_control last modified: Wed 25 Jun 2014 10:49:46 AM JST
> | ! Latest checkpoint location:   0/C28
> | ! Prior checkpoint location:0/0
> | ! Latest checkpoint's REDO location:0/C28
> | ! Latest checkpoint's REDO WAL file:0001000C
> | ***
> | *** 22 
> | ! Time of latest checkpoint:Wed 25 Jun 2014 10:45:20 AM JST
> | --- 22 
> | ! Time of latest checkpoint:Wed 25 Jun 2014 10:49:46 AM JST
>
> There are some changes which are not shown by the '-n' option at
> all. I think these should be shown by '-n'. I suppose this is a
> kind of bug, but fixing it seems like a kind of 'feature change'...
>
> Any suggestions?

This seems to be a problem with the documentation and the help message of
the -n option. According to the source code, -n displays only the values
that the -e, -l, -m, -o, -O, and -x options change. The values that -f
forcibly changes are not shown by -n. I'm not sure if this is an oversight in 108e399...

Anyway, I think that making the -n option display all the values that the
-f option changes would be useful. But since that's not a bugfix, we
should apply it only in HEAD.

Regards,

-- 
Fujii Masao




Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Robert Haas
On Tue, Jun 24, 2014 at 10:16 PM, John Klos  wrote:
>> Has anyone tried to build PostgreSQL for VAX lately?  If so, did it
>> compile?  Did you have to use --disable-spinlocks to get it to compile? If
>> it did compile, can you actually run it, and does it pass the regression
>> tests and work as expected?  Would you be willing to work with the
>> PostgreSQL to ensure continuing support for this platform, or does that seem
>> not worthwhile for whatever reason?
>
> I've compiled postgresql93-client and postgresql93-server from pkgsrc on a
> VAX running NetBSD 6.1.4. The initial launch didn't like the default stack
> limit:
>
> /etc/rc.d/pgsql start
> Initializing PostgreSQL databases.
> LOG:  invalid value for parameter "max_stack_depth": 100
> DETAIL:  "max_stack_depth" must not exceed 0kB.
> HINT:  Increase the platform's stack depth limit via "ulimit -s" or local
> equivalent.
> FATAL:  failed to initialize max_stack_depth to 100
> child process exited with exit code 1
> initdb: removing data directory "/usr/local/pgsql/data"
> pg_ctl: database system initialization failed
>
> I ran "unlimit" and tried again. The pgsql process showed it was using
> 146 megabytes of memory while initializing, then got as far as:
>
> /etc/rc.d/pgsql start
> Initializing PostgreSQL databases.

What value did it select for shared_buffers?  How much memory does a
high-end VAX have?  These days, we try to set shared_buffers = 128MB
if the platform will support it, but it's supposed to fall back to
smaller values if that doesn't work.  It will try allocating that much
though, at least for a moment, to see whether it can.
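
The fallback behaviour Robert describes can be sketched roughly as
follows. This is purely illustrative (the candidate list and probe
callback are mine; the real probing happens inside initdb), but it shows
the "try big, fall back to smaller" idea:

```c
#include <stdbool.h>

/* Sketch of the fallback: try successively smaller shared_buffers
 * settings (in MB) until the shared memory allocation succeeds. */
static int
pick_shared_buffers_mb(bool (*try_alloc)(int mb))
{
    static const int candidates[] = {128, 32, 16, 8, 4};
    int n = (int) (sizeof(candidates) / sizeof(candidates[0]));

    for (int i = 0; i < n; i++)
        if (try_alloc(candidates[i]))
            return candidates[i];
    return -1;                  /* nothing fit */
}

/* Example probe simulating a machine where only ~24MB can be mapped. */
static bool
fits_small_box(int mb)
{
    return mb <= 24;
}
```

On such a constrained machine the 128MB attempt fails and the function
settles on a smaller value, which is the behaviour one would hope to see
on a memory-limited VAX as well.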

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pg_resetxlog to clear backup start/end locations.

2014-06-24 Thread Kyotaro HORIGUCHI
Hello,

> OK, I'll modify it to reset the backup locations by default and
> remove the new '-b' option. Since this looks like a bug to people,
> I'll provide backpatches back to... 8.4?  (The final release of 8.4
> is scheduled for July 2014.)

I looked more closely at pg_resetxlog and found some of its
behavior uncomfortable.

"pg_resetxlog -n", whose help text says "no update, just show what
would be done (for testing)", shows the current values and then the
values to be changed, like this:

| Current pg_control values:
| 
| pg_control version number:942
| Catalog version number:   201406181
| Database system identifier:   6026883474640211951
| Latest checkpoint's TimeLineID:   1
| Latest checkpoint's full_page_writes: on
| Latest checkpoint's NextXID:  0/1831
| Latest checkpoint's NextOID:  32772
| Latest checkpoint's NextMultiXactId:  1
| Latest checkpoint's NextMultiOffset:  0
| Latest checkpoint's oldestXID:1800
| Latest checkpoint's oldestXID's DB:   1
| Latest checkpoint's oldestActiveXID:  0
| Latest checkpoint's oldestMultiXid:   1
| Latest checkpoint's oldestMulti's DB: 1
| Maximum data alignment:   8
...
| Values to be changed:
| 
| First log segment after reset:0001000C

But "pg_resetxlog -f" made the change as below,

| *** /tmp/1.txt  2014-06-25 10:49:42.269336739 +0900
| --- /tmp/2.txt  2014-06-25 10:49:49.780266229 +0900
| ***
| *** 5,9 
| ! pg_control last modified: Wed 25 Jun 2014 10:45:20 AM JST
| ! Latest checkpoint location:   0/B8DE1A0
| ! Prior checkpoint location:0/B8DE138
| ! Latest checkpoint's REDO location:0/B8DE1A0
| ! Latest checkpoint's REDO WAL file:0001000B
| --- 5,9 
| ! pg_control last modified: Wed 25 Jun 2014 10:49:46 AM JST
| ! Latest checkpoint location:   0/C28
| ! Prior checkpoint location:0/0
| ! Latest checkpoint's REDO location:0/C28
| ! Latest checkpoint's REDO WAL file:0001000C
| ***
| *** 22 
| ! Time of latest checkpoint:Wed 25 Jun 2014 10:45:20 AM JST
| --- 22 
| ! Time of latest checkpoint:Wed 25 Jun 2014 10:49:46 AM JST

There are some changes which are not shown by the '-n' option at
all. I think these should be shown by '-n'. I suppose this is a
kind of bug, but fixing it seems like a kind of 'feature change'...

Any suggestions?


regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: [HACKERS] Allowing join removals for more join types

2014-06-24 Thread Simon Riggs
On 24 June 2014 23:48, Tom Lane  wrote:
> Simon Riggs  writes:
>> Other than that it looks pretty good to commit, so I'll wait a week
>> for other objections then commit.
>
> I'd like to review this before it goes in.  I've been waiting for it to
> get marked "ready for committer" though.

I'll leave it for you then once I'm happy.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Allowing NOT IN to use ANTI joins

2014-06-24 Thread Simon Riggs
On 24 June 2014 23:52, Tom Lane  wrote:
> Simon Riggs  writes:
>> On 24 June 2014 23:44, Tom Lane  wrote:
>>> Simon Riggs  writes:
 Having said that, any join plan that relies upon a constraint will
 still be valid even if we drop a constraint while the plan executes
 because any new writes will not be visible to the executing join plan.
>
>>> mumble ... EvalPlanQual ?
>
>> As long as we are relaxing a constraint, we are OK if an earlier
>> snapshot thinks its dealing with a tighter constraint whereas the new
>> reality is a relaxed constraint.
>
> I guess I should have been more explicit: EvalPlanQual processing could
> see newer versions of tuples that might not satisfy the constraints the
> plan was designed against.  Now, this is true only for the tuple that's
> the target of the UPDATE/DELETE, so it's possible you could prove that
> there's no problem --- but it would take careful analysis of the specific
> semantics of the constraints in question.  I don't believe the argument
> you've made here holds up.

OK, thanks for raising that. You're better at seeing these things than I.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-24 Thread Stephen Frost
Craig,

* Craig Ringer (cr...@2ndquadrant.com) wrote:
> On 06/24/2014 10:30 PM, Alvaro Herrera wrote:
> > I haven't been following this thread, but this bit caught my attention.
> > I'm not sure I agree that OR is always the right policy either.
> > There is a case for a policy that says "forbid these rows to these guys,
> > even if they have read permissions from elsewhere".
> 
> That's generally considered a "DENY" policy, a concept borrowed from ACLs.

Right.

> > If OR is the only
> > way to mix multiple policies there might not be a way to implement this.
> 
> I think that's a "later" myself, but we shouldn't design ourselves into
> a corner where we can't support deny rules either.

Agreed, but I don't want to get so wrapped up in all of this that we end
up with a set of requirements so long that we'll never be able to
accomplish them all in a single release...

Thanks!

Stephen




Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-24 Thread Craig Ringer
On 06/24/2014 10:30 PM, Alvaro Herrera wrote:
> I haven't been following this thread, but this bit caught my attention.
> I'm not sure I agree that OR is always the right policy either.
> There is a case for a policy that says "forbid these rows to these guys,
> even if they have read permissions from elsewhere".

That's generally considered a "DENY" policy, a concept borrowed from ACLs.

You have access to a resource if:

- You have at least one policy that gives you access AND
- You have no policies that deny you access
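
That combination rule can be sketched as follows (the types are
illustrative, not a proposed catalog representation): a user has access
iff at least one matching policy allows and no matching policy denies.

```c
#include <stdbool.h>

/* Sketch of the ACL-style DENY rule described above. */
typedef enum { POLICY_ALLOW, POLICY_DENY } PolicyKind;

typedef struct
{
    PolicyKind kind;
    bool       matches;         /* does this policy apply to the user? */
} Policy;

static bool
has_access(const Policy *policies, int n)
{
    bool allowed = false;

    for (int i = 0; i < n; i++)
    {
        if (!policies[i].matches)
            continue;
        if (policies[i].kind == POLICY_DENY)
            return false;       /* any matching deny wins */
        allowed = true;
    }
    return allowed;
}
```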

> If OR is the only
> way to mix multiple policies there might not be a way to implement this.

I think that's a "later" myself, but we shouldn't design ourselves into
a corner where we can't support deny rules either.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] RLS Design

2014-06-24 Thread Stephen Frost
Dean, all,

Changing the subject of this thread (though keeping it threaded) as
we've really moved on to a much broader discussion.

* Dean Rasheed (dean.a.rash...@gmail.com) wrote:
> On 24 June 2014 17:27, Stephen Frost  wrote:
> > Single policy vs Multiple, Overlapping policies vs Multiple, 
> > Non-overlapping policies
> 
> What I was describing upthread was multiple non-overlapping policies.

Ok.

> I disagree that this will be more complicated to use. It's a strict
> superset of the single policy functionality, so if you want to do it
> all using a single policy then you can. But I think that once the ACLs
> reach a certain level of complexity, you probably will want to break
> it up into multiple policies, and I think doing so will make things
> simpler, not more complicated.

If we keep it explicitly to per-role only, with only one policy ever
being applied, then perhaps it would be, but I'm not convinced..

> Taking a specific, simplistic example, suppose you had 2 groups of
> users - some are normal users who should only be able to access their
> own records. For these users, you might have a policy like
> 
>   WHERE person_id = current_user
> 
> which would be highly selective, and probably use an index scan. Then
> there might be another group of users who are managers with access to
> the records of, say, everyone in their department. This might then be
> a more complex qual along the lines of
> 
>   WHERE person_id IN (SELECT ... FROM person_department
>WHERE mgr_id = current_user AND ...)
> 
> which might end up being a hash or merge join, depending on any
> user-supplied quals.

Certainly my experience with such a setup is that it includes at least 4
levels (self, manager, director, officer).  Now, officer you could
perhaps exclude as being simply RLS-exempt but with such a structure I
would think we'd just make that a special kind of policy (and not chew
up those last 4 bits).  As for this example, it's quite naturally done
with a recursive query as it's a tree structure, but if you want to keep
the qual simple and fast, you'd materialize the results of such a query
and simply have:

WHERE EXISTS (SELECT 1 FROM org_chart
              WHERE current_user = emp_id
                AND person_id = org_chart.id)

> You _could_ combine those into a single policy, but I think it would
> be much better to have 2 distinct policies, since they're 2 very
> different queries, for different use cases. Normal users would only be
> granted permission to use the normal_user_policy. Managers might be
> granted permission to use either the normal_user_policy or the
> manager_policy (but not both at the same time).

I can't recall a system where managers have to request access to their
manager role.  Having another way of changing the permissions which are
applied to a session (the existing one being 'set role') doesn't strike
me as a great idea either.

> That's a very simplified example. In more realistic situations there
> are likely to be many more classes of users, and trying to enforce all
> the logic in a single WHERE clause is likely to get unmanageable, or
> inefficient if it involves lots of logic hidden away in functions.

Functions and external security systems are exactly the real-world
use-case which users I've talked to are looking for.  All of this
discussion is completely orthogonal to their requirements.  I understand
that there are simpler use-cases than those and we may be able to
provide an approach which performs better for those.

> Allowing multiple, non-overlapping policies allows the problem to be
> broken up into more manageable pieces, which also makes the planner's
> job easier, since only a single, simpler policy is in effect in any
> given query.

Let's try to outline what this would look like then.

Taking your approach, we'd have:

CREATE POLICY p1;
CREATE POLICY p2;

ALTER TABLE t1 SET POLICY p1 TO t1_p1_quals;
ALTER TABLE t1 SET POLICY p2 TO t1_p2_quals;

GRANT SELECT ON TABLE t1 TO role1 USING p1;
GRANT SELECT ON TABLE t1 TO role2 USING p2;

I'm guessing we would need to further support:

GRANT INSERT ON TABLE t1 TO role1 USING p2;

as we've already discussed being able to support per-action (SELECT,
INSERT, UPDATE, DELETE) policies.  I'm not quite sure how to address
that though.

Further, as you mention, users would be able to do:

SET rls_policy = whatever;

and things would appear fine, until they tried to access a table for
which they didn't have that policy, at which point they'd get an
error.

You mention:

GRANT SELECT (col1, col2), UPDATE (col1) ON t1 TO bob USING policy1;

but, to be clear, there would be no option for policies to be
column-specific, right?  The policy would apply to the whole row and
just the SELECT/UPDATE privileges would be on the specific columns (as
exists today).

From this what I'm gathering is that we'd need catalog tables along
these lines:

rls_policy
  oid, polname name, polowner oid, polnamespace oid, pola

Re: [HACKERS] [BUGS] BUG #10728: json_to_recordset with nested json objects NULLs columns

2014-06-24 Thread Tom Lane
Merlin Moncure  writes:
> On Mon, Jun 23, 2014 at 8:43 PM, Tom Lane  wrote:
>> * Nested json arrays are a bit more problematic.  What I'd ideally like
>> is to spit them out in a form that would be successfully parsable as a SQL
>> array of the appropriate element type.  Unfortunately, I think that that
>> ship has sailed because json_populate_recordset failed to do that in 9.3.
>> What we should probably do is define this the same as the nested object
>> case, ie, we spit it out in *json* array format, meaning you can insert it
>> into a text or json/jsonb field of the result record.  Maybe sometime in
>> the future we can add a json-array-to-SQL-array converter function, but
>> these functions won't do that.

> Not quite following your logic here.  9.3 gave an error for an
> internally nested array:

> postgres=# create type foo as(a int, b int[]);
> postgres=# select * from json_populate_recordset(null::foo, '[{"a": 1,
> "b": [1,2,3]},{"a": 1, "b": [1,2,3]}]');
> ERROR:  cannot call json_populate_recordset on a nested object

Yeah, that's the default behavior, with use_json_as_text false.
However, consider what happens with use_json_as_text true:

regression=# select * from json_populate_recordset(null::foo, '[{"a": 1,
"b": [1,2,3]},{"a": 1, "b": [1,2,3]}]', true);
ERROR:  missing "]" in array dimensions

That case is certainly useless, but suppose somebody had done

regression=# create type foo2 as(a int, b json);
CREATE TYPE
regression=# select * from json_populate_recordset(null::foo2, '[{"a": 1,
"b": [1,2,3]},{"a": 1, "b": [1,2,3]}]', true);
 a |b
---+-
 1 | [1,2,3]
 1 | [1,2,3]
(2 rows)

or even just

regression=# create type foo3 as(a int, b text);
CREATE TYPE
regression=# select * from json_populate_recordset(null::foo3, '[{"a": 1,
"b": [1,2,3]},{"a": 1, "b": [1,2,3]}]', true);
 a |b
---+-
 1 | [1,2,3]
 1 | [1,2,3]
(2 rows)

Since these cases work and do something arguably useful, I doubt we
can break them.

However, I don't see anything wrong with changing the behavior in
cases that currently throw an error, since presumably no application
is depending on them.  Perhaps Andrew's comment about looking at the
target type info yields a way forward, ie, we could output in SQL-array
format if the target is an array, or in JSON-array format if the target
is json.  Multiply-nested cases might be a pain to get right though.
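For illustration, a client-side json-array-to-SQL-array converter of the kind mentioned could look roughly like this (a hedged sketch, not a proposal for the server-side functions; nested values are kept as JSON text, matching the behaviour described above):

```python
import json

def json_array_to_pg_literal(text):
    # Turn a flat JSON array into a PostgreSQL array literal string.
    # Nested arrays/objects are emitted as quoted JSON text rather than
    # being recursed into.
    vals = json.loads(text)

    def elem(v):
        if v is None:
            return "NULL"
        if isinstance(v, (list, dict)):
            return '"%s"' % json.dumps(v).replace('"', '\\"')
        return json.dumps(v)

    return "{%s}" % ",".join(elem(v) for v in vals)

print(json_array_to_pg_literal('[1,2,3]'))  # -> {1,2,3}
```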

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Allowing NOT IN to use ANTI joins

2014-06-24 Thread Tom Lane
Simon Riggs  writes:
> On 24 June 2014 23:44, Tom Lane  wrote:
>> Simon Riggs  writes:
>>> Having said that, any join plan that relies upon a constraint will
>>> still be valid even if we drop a constraint while the plan executes
>>> because any new writes will not be visible to the executing join plan.

>> mumble ... EvalPlanQual ?

> As long as we are relaxing a constraint, we are OK if an earlier
> snapshot thinks it's dealing with a tighter constraint whereas the new
> reality is a relaxed constraint.

I guess I should have been more explicit: EvalPlanQual processing could
see newer versions of tuples that might not satisfy the constraints the
plan was designed against.  Now, this is true only for the tuple that's
the target of the UPDATE/DELETE, so it's possible you could prove that
there's no problem --- but it would take careful analysis of the specific
semantics of the constraints in question.  I don't believe the argument
you've made here holds up.

regards, tom lane




Re: [HACKERS] Allowing join removals for more join types

2014-06-24 Thread Tom Lane
Simon Riggs  writes:
> Other than that it looks pretty good to commit, so I'll wait a week
> for other objections then commit.

I'd like to review this before it goes in.  I've been waiting for it to
get marked "ready for committer" though.

regards, tom lane




Re: [HACKERS] Allowing NOT IN to use ANTI joins

2014-06-24 Thread Simon Riggs
On 24 June 2014 23:44, Tom Lane  wrote:
> Simon Riggs  writes:
>> Having said that, any join plan that relies upon a constraint will
>> still be valid even if we drop a constraint while the plan executes
>> because any new writes will not be visible to the executing join plan.
>
> mumble ... EvalPlanQual ?

As long as we are relaxing a constraint, we are OK if an earlier
snapshot thinks it's dealing with a tighter constraint whereas the new
reality is a relaxed constraint.

The worst that could happen is we hit an ERROR from a constraint that
was in force at the start of the query, so for consistency we really
should be enforcing the same constraint throughout the lifetime of the
query.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Allowing NOT IN to use ANTI joins

2014-06-24 Thread Tom Lane
Simon Riggs  writes:
> Having said that, any join plan that relies upon a constraint will
> still be valid even if we drop a constraint while the plan executes
> because any new writes will not be visible to the executing join plan.

mumble ... EvalPlanQual ?

regards, tom lane




Re: [HACKERS] Allowing NOT IN to use ANTI joins

2014-06-24 Thread Simon Riggs
On 11 June 2014 17:52, Greg Stark  wrote:
> On Wed, Jun 11, 2014 at 3:26 PM, Tom Lane  wrote:
>> If we didn't have mechanisms like this, we'd have far worse hazards from
>> ALTER TABLE than whether the planner made an incorrect join optimization.
>> Consider ALTER COLUMN TYPE for instance.
>
> Obviously not general cases of ALTER COLUMN TYPE but dropping a NULL
> constraint seems like the kind of change targeted by Simon's "reduce
> lock strength" patch that I'm sure he's still interested in. I think
> that patch, while full of dragons to steer around, is something that
> will keep coming up again and again in the future. It's a huge
> operational risk that even these short exclusive locks can cause a
> huge production outage if they happen to get queued up behind a
> reporting query.

The focus of the lock strength reduction was around actions that lock
the table for extended periods. So it was mostly about adding things.
All the DROP actions are still AccessExclusiveLocks and will be for a
while.

Having said that, any join plan that relies upon a constraint will
still be valid even if we drop a constraint while the plan executes
because any new writes will not be visible to the executing join plan.
If we are relaxing a constraint, then a writable query that still
thinks a constraint exists won't cause a problem - it may error out
when it need not, but that's not so bad as to be worth worrying about.

So I think we can remove a NOT NULL constraint without too much problem.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Allowing NOT IN to use ANTI joins

2014-06-24 Thread Simon Riggs
On 24 June 2014 11:32, David Rowley  wrote:

> So if anyone can point me in the right direction then that would be
> really useful.

Many things can be added simply, but most things can't. It seems we
just don't have that information. If we did, Tom would have done this
already.

> On a more positive or even slightly exciting note I think I've managed to
> devise a way that ANTI JOINS can be used for NOT IN much more often. It
> seems that find_nonnullable_vars will analyse a quals list to find
> expressions that mean that the var cannot be NULL. This means we can perform
> ANTI JOINS for NOT IN with queries like:
>
> SELECT * FROM a WHERE id NOT IN(SELECT nullable_col FROM b WHERE
> nullable_col = 1);
> or
> SELECT * FROM a WHERE id NOT IN(SELECT nullable_col FROM b WHERE
> nullable_col IS NOT NULL);
>
> (The attached patch implements this)
>
> the nullable_col = 1 will mean that nullable_col cannot be NULL, so the ANTI
> JOIN can be performed safely. I think this combined with the NOT NULL check
> will cover probably just about all valid uses of NOT IN with a subquery...
> unless of course I've assumed something wrongly about find_nonnullable_vars.
> I just need the correct RangeTblEntry in order to determine if the
> TargetEntry is from an outer join.

This is the better way to go. It's much better to have explicit proof
it's not null than a possibly long chain of metadata that might be
buggy.
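The reason the not-null proof is required at all is SQL's three-valued NOT IN semantics; a small model of the predicate (not planner code) shows why a possible NULL forbids the anti-join rewrite:

```python
def sql_not_in(x, values):
    # SQL semantics of "x NOT IN (subquery)": True, False, or None
    # (unknown).  A WHERE clause keeps only rows evaluating to True.
    if x is None:
        return None
    if x in [v for v in values if v is not None]:
        return False
    if any(v is None for v in values):
        return None  # x might equal the NULL, so the result is unknown
    return True

# A plain anti-join would return the row for x=4 in both cases below, but
# SQL must return no row when the list can contain NULL -- hence the need
# to prove nullable_col NOT NULL before rewriting.
```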

> The attached patch is a broken implemention that still needs the lookup code
> fixed to reference the correct RTE. The failing regression tests show where
> the problems lie.
>
> Any help on this would be really appreciated.

I'd suggest we just drop the targetlist approach completely.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Allowing join removals for more join types

2014-06-24 Thread Simon Riggs
On 23 June 2014 12:06, David Rowley  wrote:

>> It's not clear to me where you get the term "sortclause" from. This is
>> either the groupclause or distinctclause, but in the test cases you
>> provide this shows this has nothing at all to do with sorting since
>> there is neither an order by or a sorted aggregate anywhere near those
>> queries. Can we think of a better name that won't confuse us in the
>> future?
>>
>
> I probably got the word "sort" from the function targetIsInSortList, which
> expects a list of SortGroupClause. I've renamed the function to
> sortlist_is_unique_on_restrictinfo() and renamed the sortclause parameter to
> sortlist. Hopefully will reduce confusion about it being an ORDER BY clause
> a bit more. I think sortgroupclauselist might be just a bit too long. What
> do you think?

OK, perhaps I should be clearer. The word "sort" here seems completely
misplaced and we should be using a more accurately descriptive term.
It's slightly more than editing to rename things like that, so I'd
prefer you came up with a better name.

Did you comment on the transitive closure question? Should we add a
test for that, whether or not it works yet?

Other than that it looks pretty good to commit, so I'll wait a week
for other objections then commit.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-24 Thread Stephen Frost
Dean,

* Dean Rasheed (dean.a.rash...@gmail.com) wrote:
> Thinking about the examples upthread, a separate issue occurs to me
> --- when defining a RLS qual, I think that there has to be a syntax to
> specify an alias for the main table, so that correlated subqueries can
> refer to it. I'm not sure if that's been mentioned in any of the
> discussions so far, but it might be quite hard to define certain quals
> without it.

Yeah, that thought had occurred to me also.  Have any suggestions about
how to approach that issue?  The way triggers have OLD/NEW comes to mind
but I'm not sure how easily that'd work.

Thanks,

Stephen




Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-24 Thread Dean Rasheed
Thinking about the examples upthread, a separate issue occurs to me
--- when defining a RLS qual, I think that there has to be a syntax to
specify an alias for the main table, so that correlated subqueries can
refer to it. I'm not sure if that's been mentioned in any of the
discussions so far, but it might be quite hard to define certain quals
without it.

Regards,
Dean




Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-24 Thread Dean Rasheed
On 24 June 2014 17:27, Stephen Frost  wrote:
> Single policy vs Multiple, Overlapping policies vs Multiple, Non-overlapping 
> policies
>

What I was describing upthread was multiple non-overlapping policies.

I disagree that this will be more complicated to use. It's a strict
superset of the single policy functionality, so if you want to do it
all using a single policy then you can. But I think that once the ACLs
reach a certain level of complexity, you probably will want to break
it up into multiple policies, and I think doing so will make things
simpler, not more complicated.

Taking a specific, simplistic example, suppose you had 2 groups of
users - some are normal users who should only be able to access their
own records. For these users, you might have a policy like

  WHERE person_id = current_user

which would be highly selective, and probably use an index scan. Then
there might be another group of users who are managers with access to
the records of, say, everyone in their department. This might then be
a more complex qual along the lines of

  WHERE person_id IN (SELECT ... FROM person_department
                      WHERE mgr_id = current_user AND ...)

which might end up being a hash or merge join, depending on any
user-supplied quals.

You _could_ combine those into a single policy, but I think it would
be much better to have 2 distinct policies, since they're 2 very
different queries, for different use cases. Normal users would only be
granted permission to use the normal_user_policy. Managers might be
granted permission to use either the normal_user_policy or the
manager_policy (but not both at the same time).

That's a very simplified example. In more realistic situations there
are likely to be many more classes of users, and trying to enforce all
the logic in a single WHERE clause is likely to get unmanageable, or
inefficient if it involves lots of logic hidden away in functions.
Allowing multiple, non-overlapping policies allows the problem to be
broken up into more manageable pieces, which also makes the planner's
job easier, since only a single, simpler policy is in effect in any
given query.

Regards,
Dean




Bug in spg_range_quad_inner_consistent for adjacent operator (was Re: [HACKERS] Add a filed to PageHeaderData)

2014-06-24 Thread Heikki Linnakangas

On 06/24/2014 08:48 PM, Pavan Deolasee wrote:

FWIW I can reproduce this on HEAD with the attached patch. I could
reproduce this on 64-bit Ubuntu as well as 64-bit Mac OS X. It is very
confusing because I tried various values for N in the char[N] array and it
fails for N=20. Other values I tried are 4, 12, 22, and 24, and the test
passes for all of them. The logic for trying other values is to see if
pd_linp[] starting on an unaligned boundary can trigger the issue. But
there seems to be no correlation.

postgres=# select version();

PostgreSQL 9.5devel on x86_64-apple-darwin13.2.0, compiled by Apple LLVM
version 5.1 (clang-503.0.38) (based on LLVM 3.4svn), 64-bit

postgres=# -- test SP-GiST index that's been built incrementally

postgres=# create table test_range_spgist(ir int4range);
postgres=# create index test_range_spgist_idx on test_range_spgist using
spgist (ir);
postgres=# insert into test_range_spgist select int4range(g, g+10) from
generate_series(1,586) g;
INSERT 0 586

postgres=# SET enable_seqscan= t;
postgres=# SET enable_indexscan  = f;
postgres=# SET enable_bitmapscan = f;

postgres=# select * from test_range_spgist where ir -|- int4range(100,500);
 ir
---
[90,100)
[500,510)
(2 rows)

postgres=# SET enable_seqscan= f;
postgres=# select * from test_range_spgist where ir -|- int4range(100,500);
 ir
---
  [90,100)
  [500,510)
(2 rows)

At this point, both rows are visible via index scan as well as seq scan.

postgres=# insert into test_range_spgist select int4range(g, g+10) from
generate_series(587,587) g;
INSERT 0 1

postgres=# select * from test_range_spgist where ir -|- int4range(100,500);
 ir
--
  [90,100)
(1 row)

Ouch. The second row somehow disappeared.

postgres=# SET enable_seqscan= t;
postgres=# select * from test_range_spgist where ir -|- int4range(100,500);
 ir
---
  [90,100)
  [500,510)
(2 rows)

So the last INSERT suddenly makes one row disappear via the index scan
though its still reachable via seq scan. I tried looking at the SP-Gist
code but clearly I don't understand it a whole lot to figure out the issue,
if one exists.


Yeah, I can reproduce this. It doesn't seem to be related to the padding 
or alignment at all. The padding just happens to move tuples around so 
that [500, 510) is picked as an SP-GiST inner node.


The real bug is in spg_range_quad_inner_consistent(), for the adjacent 
operator. Things go wrong when:


The scan key is [100, 500)
The prev centroid is [500, 510)
The current centroid is [544, 554).

The row that should match but isn't returned, [500, 510), is equal to the
previous centroid. It's in quadrant 3 from the current centroid, but 
spg_range_quad_inner_consistent() incorrectly concludes that it doesn't 
need to scan that quadrant.


The function compares the scan key's upper bound with the previous
centroid's lower bound and the current centroid's lower bound:



/*
 * Check if upper bound of argument is not in a
 * quadrant we visited in the previous step.
 */
cmp1 = range_cmp_bounds(typcache, &upper, &prevLower);
cmp2 = range_cmp_bounds(typcache, &centroidLower,
&prevLower);
if ((cmp2 < 0 && cmp1 > 0) || (cmp2 > 0 && cmp1 < 0))
which2 = 0;


The idea is that if the scan key's upper bound doesn't fall between the 
prev and current centroid's lower bounds, there is no match.


      *  *            *
      X  PL           CL

X  = scan key's upper bound: 500)
PL = prev centroid's lower bound: [500
CL = current centroid's lower bound: [544

This is wrong. X < PL, but it's still nevertheless adjacent to it.
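The miss is easy to reproduce with a toy model of the bound comparison (a simplification of range_cmp_bounds, not the actual C code; bounds are (value, inclusive) pairs):

```python
def cmp_upper_vs_lower(upper, lower):
    # Compare an upper bound against a lower bound, range-style: at equal
    # values, an exclusive upper "v)" sorts strictly below an inclusive
    # lower "[v", even though the two ranges are adjacent there.
    (v1, incl1), (v2, incl2) = upper, lower
    if v1 != v2:
        return -1 if v1 < v2 else 1
    return 0 if (incl1 and incl2) else -1

X  = (500, False)  # scan key's upper bound: 500)
PL = (500, True)   # prev centroid's lower bound: [500
CL = (544, True)   # current centroid's lower bound: [544

cmp1 = cmp_upper_vs_lower(X, PL)    # -1: X sorts below PL
cmp2 = 1 if CL[0] > PL[0] else -1   # +1: CL sorts above PL
pruned = (cmp2 < 0 and cmp1 > 0) or (cmp2 > 0 and cmp1 < 0)
# pruned == True: the quadrant containing [500,510) is skipped even
# though 500) is adjacent to [500.
```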

I'll take a closer look tomorrow...

(The "if (which2) ..." block after the code I quoted above also looks 
wrong - it seems to be comparing the argument's lower bound when it 
should be comparing the upper bound according to the comment. )


- Heikki





Re: [HACKERS] How about a proper TEMPORARY TABLESPACE?

2014-06-24 Thread Matheus de Oliveira
On Sun, Jun 22, 2014 at 2:35 AM, Craig Ringer  wrote:

> A way to put UNLOGGED objects in such a space and have them recovered
> if they vanish would also be valuable, IMO.
>
> Not necessarily in the same patch, I'd just rather keep it in mind so
> any chosen design doesn't preclude adding that later.
>

The idea is nice, but I think you should think more about it. Where would we
put the "init" files in this case? It surely can't be in the tablespace.

Best regards,
-- 
Matheus de Oliveira
Analista de Banco de Dados
Dextra Sistemas - MPS.Br nível F!
www.dextra.com.br/postgres


[HACKERS] Keepalive-related socket options under FreeBSD 9, 10

2014-06-24 Thread Piotr Stefaniak
Since upgrading FreeBSD from 8 to 9, I've noticed the following messages
showing up in logs when a connection with pgAdmin3 is made:

LOG:  getsockopt(TCP_KEEPCNT) failed: Protocol not available
STATEMENT:  SELECT setting FROM pg_settings WHERE name IN ('autovacuum',
'track_counts')
LOG:  getsockopt(TCP_KEEPIDLE) failed: Protocol not available
STATEMENT:  SELECT setting FROM pg_settings WHERE name IN ('autovacuum',
'track_counts')
LOG:  getsockopt(TCP_KEEPINTVL) failed: Protocol not available
STATEMENT:  SELECT setting FROM pg_settings WHERE name IN ('autovacuum',
'track_counts')

tcp_keepalives_idle, tcp_keepalives_interval, and tcp_keepalives_count
are all set to the default (0), which means "system default".

My guess as to what causes this:

src/backend/libpq/pqcomm.c apparently assumes that if TCP_KEEPIDLE &
friends are defined, then the respective options are readable, but
according to man tcp, that is not the case for FreeBSD 9 (and 10):

TCP_KEEPINIT
This write-only setsockopt(2) option accepts a per-socket
timeout argument of u_int in seconds, for new,
non-established TCP connections.  For the global default in
milliseconds see keepinit in the MIB Variables section
further down.

As a work-around, I've set the keepalive options to the system defaults
provided by man tcp.
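The fix on the pqcomm.c side would presumably be to treat such a failure as "use the system default" rather than logging it; the pattern, sketched in Python with a stub standing in for getsockopt (illustrative only, not the server code):

```python
import errno

def read_keepalive_option(getter, option, system_default):
    # Return the option's current value, falling back to the system
    # default when the platform makes the option write-only and the
    # getter fails with ENOPROTOOPT, as on FreeBSD 9/10.
    try:
        return getter(option)
    except OSError as e:
        if e.errno == errno.ENOPROTOOPT:
            return system_default
        raise

def freebsd_getsockopt(option):
    # Stub emulating a write-only option ("Protocol not available").
    raise OSError(errno.ENOPROTOOPT, "Protocol not available")

val = read_keepalive_option(freebsd_getsockopt, "TCP_KEEPIDLE", 7200)
# val is the assumed system default, with no log noise
```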




Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Alvaro Herrera
Dave McGuire wrote:
> On 06/24/2014 12:42 PM, Tom Lane wrote:

> > I think this means we can write off VAX on NetBSD/OpenBSD as a viable
> > platform for Postgres :-(.  I'm sad to hear it, but certainly have
> > not got the cycles personally to prevent it.
> 
>   Nonono...NetBSD/vax has had shared library support for many years.
> It's only OpenBSD that has that limitation.

So now we know that NetBSD/vax is free of the shared library limitation
that plagues OpenBSD, but does Postgres work on NetBSD/vax otherwise?

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Dave McGuire
On 06/24/2014 12:42 PM, Tom Lane wrote:
> "Sebastian Reitenbach"  writes:
>> OK, that was easy:
> 
>> $ cd /usr/ports/databases/postgresql   
>> $ make install
>> ===>  postgresql-client-9.3.4p0  requires shared libraries .
> 
>> OpenBSD VAX is static only, so no postgresql on OpenBSD
>> VAX before shared libraries will ever be made working on it.
> 
> Ouch.  We long ago passed the point of no return as far as requiring
> shared library support: there's too much backend functionality that's
> in separate shared libraries rather than being linked directly into
> the core executable.  I doubt anyone will be interested in taking on
> the task of supporting a parallel all-static build.
> 
> I think this means we can write off VAX on NetBSD/OpenBSD as a viable
> platform for Postgres :-(.  I'm sad to hear it, but certainly have
> not got the cycles personally to prevent it.

  Nonono...NetBSD/vax has had shared library support for many years.
It's only OpenBSD that has that limitation.

-Dave

-- 
Dave McGuire, AK4HZ/3
New Kensington, PA




Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Paul Koning

On Jun 24, 2014, at 12:42 PM, Tom Lane  wrote:

> "Sebastian Reitenbach"  writes:
>> OK, that was easy:
> 
>> $ cd /usr/ports/databases/postgresql   
>> $ make install
>> ===>  postgresql-client-9.3.4p0  requires shared libraries .
> 
>> OpenBSD VAX is static only, so no postgresql on OpenBSD
>> VAX before shared libraries will ever be made working on it.
> 
> Ouch.  We long ago passed the point of no return as far as requiring
> shared library support: there's too much backend functionality that's
> in separate shared libraries rather than being linked directly into
> the core executable.  I doubt anyone will be interested in taking on
> the task of supporting a parallel all-static build.
> 
> I think this means we can write off VAX on NetBSD/OpenBSD as a viable
> platform for Postgres :-(.  I'm sad to hear it, but certainly have
> not got the cycles personally to prevent it.

NetBSD and OpenBSD are different systems.  I don’t remember if NetBSD supports 
shared libraries on VAX, but that’s independent of the fact that OpenBSD 
doesn’t.

paul






Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Matt Thomas

On Jun 24, 2014, at 9:42 AM, Tom Lane  wrote:

> I think this means we can write off VAX on NetBSD/OpenBSD as a viable
> platform for Postgres :-(.  I'm sad to hear it, but certainly have
> not got the cycles personally to prevent it.

Why?  NetBSD/vax has supported shared libraries for a long long time.





Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Anders Magnusson

Tom Lane wrote 2014-06-24 18:42:

"Sebastian Reitenbach"  writes:

OK, that was easy:
$ cd /usr/ports/databases/postgresql
$ make install
===>  postgresql-client-9.3.4p0  requires shared libraries .
OpenBSD VAX is static only, so no postgresql on OpenBSD
VAX before shared libraries will ever be made working on it.

Ouch.  We long ago passed the point of no return as far as requiring
shared library support: there's too much backend functionality that's
in separate shared libraries rather than being linked directly into
the core executable.  I doubt anyone will be interested in taking on
the task of supporting a parallel all-static build.

I think this means we can write off VAX on NetBSD/OpenBSD as a viable
platform for Postgres :-(.  I'm sad to hear it, but certainly have
not got the cycles personally to prevent it.


OpenBSD/vax is static only.  NetBSD/vax has dynamic libraries.

-- Ragge




Re: [HACKERS] pg_receivexlog add synchronous mode

2014-06-24 Thread Fujii Masao
On Tue, Jun 24, 2014 at 3:18 PM,   wrote:
>> I found that this patch breaks --status-interval option of
>> pg_receivexlog when -m option which the patch introduced is supplied.
>> When -m is set, pg_receivexlog tries to send the feedback message as soon
>> as it flushes WAL file even if status interval timeout has not been passed
>> yet. If you want to send the feedback as soon as WAL is written or flushed,
>> like walreceiver does, you need to extend --status-interval option, for
>> example, so that it accepts the value "-1" which means enabling that
>> behavior.
>>
>> Including this change in your original patch would make it more difficult
>> to review. I think that you should implement this as separate patch.
>> Thought?
> As per your comments, the current specification ignores --status-interval;
> it is necessary to respond immediately to synchronize.
>
> The specification of --status-interval needs more thought, so I revised
> the patch into a flush mode which performs the flush at the same timing
> as walreceiver.

I'm not sure if it's a good idea to call the feature you'd like to add
'flush mode'. ISTM that 'flush mode' is vague and confusing for users.
Instead, what about adding something like the --fsync-interval option
which pg_recvlogical supports?
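For what it's worth, an --fsync-interval behaviour reduces to a simple per-message decision; a sketch of that logic (the exact semantics here are illustrative, not necessarily pg_recvlogical's):

```python
def should_fsync(now_ms, last_fsync_ms, fsync_interval_ms, pending_bytes):
    # Decide whether received WAL should be flushed now.
    #   0  -> flush whenever data is pending (synchronous-like behaviour)
    #   <0 -> never flush on a timer
    #   N  -> flush once at least N milliseconds have elapsed
    if pending_bytes == 0:
        return False
    if fsync_interval_ms == 0:
        return True
    if fsync_interval_ms < 0:
        return False
    return (now_ms - last_fsync_ms) >= fsync_interval_ms
```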

> The change removes the feedback message sent right after the flush, and
> transmits the feedback message according to the status interval.
> I also renamed the mode from syncmode to flushmode, and fixed the
> documentation.

+ * Receive a message available from XLOG stream, blocking for
+ * maximum of 'timeout' ms.

The above comment seems incorrect because 'timeout' is a boolean argument.

+FD_ZERO(&input_mask);
+FD_SET(PQsocket(conn), &input_mask);
+if (standby_message_timeout)

Why did you get rid of the check of 'still_sending' flag here? Originally the
flag was checked but not in the patch.

+r = rcv_receive(true, &copybuf, conn,
standby_message_timeout, last_status, now);

When the return value is -2 (i.e., an error happened), we should go to
the 'error' label.

ISTM that stream_stop() should be called every time a message is
processed. But the patch changes pg_receivexlog so that it keeps
processing the received data without calling stream_stop(). This seems
incorrect.

'copybuf' needs to be free'd every time a new message is received. But you
seem to have forgotten to do that when rcv_receive() with no timeout is
called.

Regards,

-- 
Fujii Masao


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Josh Berkus
On 06/24/2014 10:17 AM, Tom Lane wrote:
> Josh Berkus  writes:
>> On 06/23/2014 03:52 PM, Andres Freund wrote:
>>> True.  Which makes me wonder whether we shouldn't default this to
>>> something non-zero -- even if it is 5 or 10 days.
> 
>> I'd go for even shorter: 48 hours.  I'd suggest 24 hours, but that would
>> trip up some users who just need really long pg_dumps.
> 
> FWIW, I do not think we should have a nonzero default for this.
> We could not safely set it to any value that would be small enough
> to be really useful in the field.

48 hours would actually be a useful value; I've dealt multiple times
with newbie users who had a transaction which had been open for a week.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Tom Lane
Dave McGuire  writes:
> On 06/24/2014 12:42 PM, Tom Lane wrote:
>> I think this means we can write off VAX on NetBSD/OpenBSD as a viable
>> platform for Postgres :-(.  I'm sad to hear it, but certainly have
>> not got the cycles personally to prevent it.

>   Nonono...NetBSD/vax has had shared library support for many years.
> It's only OpenBSD that has that limitation.

Ah, thanks for the clarification.

regards, tom lane




Re: [HACKERS] ALTER TABLESPACE MOVE command tag tweak

2014-06-24 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Stephen Frost  writes:
> > That it's more-or-less a bulk 'ALTER TABLE' operation is why I had been
> > trying to think of a way to put it under that command.  What if we had a
> > more general way to reference 'all objects in a tablespace'?
> > "tablespace.*" or "ALL:TABLESPACE"?  Are there other places which might
> > benefit from being able to take and operate on all objects in a
> > tablespace?
> 
> > Of course, we handle this in 'GRANT' with 'GRANT ON ALL TABLES', so why
> > not 'ALTER TABLE ON ALL TABLES IN TABLESPACE '?  that does get
> > pretty darn verbose but is at least a bit more in-line with what we have
> > done before..
> 
> That's not a bad line of thought --- I doubt that verbosity is critical
> here.

Alright, sounds like this is more-or-less the consensus.  I'll see about
making it happen shortly.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] pgaudit - an auditing extension for PostgreSQL

2014-06-24 Thread Stephen Frost
Abhijit,

* Abhijit Menon-Sen (a...@2ndquadrant.com) wrote:
> At 2014-06-23 16:51:55 -0400, sfr...@snowman.net wrote:
> > Are both the connected user and the current role that the command is
> > running under logged?
> 
> Yes, they are.

Ok, great, I couldn't remember.  Wish we had that ability in the current
logging code...

> > I'd much rather have that in-core capability and I worry that adding
> > pgaudit as an external feature now would end up preventing us from
> > moving forward in this area for years to come..
> 
> OK. I've marked the patch as rejected in the CF, but of course we hope
> to see further discussion about an in-core implementation for 9.5.

I'm certainly all for it, though I'm not sure if I'll have resources
myself to be able to make it happen this fall..  Will you (collectively)
be working in this direction for 9.5?  That'd certainly be great news
from my quadrant (pun fully intended ;).

Thanks!

Stephen




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Andres Freund  writes:
> > On 2014-06-24 10:17:49 -0700, Tom Lane wrote:
> >> BTW, has anyone thought about the interaction of this feature with
> >> prepared transactions?  I wonder whether there shouldn't be a similar but
> >> separately-settable maximum time for a transaction to stay in the prepared
> >> state.  If we could set a nonzero default on that, perhaps on the order of
> >> a few minutes, we could solve the ancient bugaboo that "prepared
> >> transactions are too dangerous to enable by default".
> 
> > I'd very much like that feature, but I'm not sure how to implement
> > it. Which process would do that check? We currently only allow rollbacks
> > from the corresponding database...
> > The best idea I have is to do it via autovacuum.
> 
> I did not actually have any plan in mind when I wrote that, but your
> mention of autovacuum suggests an idea for it: consider the code that
> kicks autovacuum off a table when somebody wants exclusive lock.
> In the same way, we could teach processes that want a lock that conflicts
> with a prepared xact that they can kill the prepared xact if it's more
> than X seconds old.
> 
> The other way in which old prepared xacts are dangerous is in blocking
> cleanup of dead tuples, and I agree with your thought that maybe
> autovacuum is the place to deal with that.  I don't know whether we'd
> really need both mechanisms, or if just one would be enough.
> 
> In either case, this wouldn't directly be a timeout but rather a "license
> to kill" once a prepared xact exceeds the threshold and is getting in
> somebody's way.

Why isn't this what we want for idle-in-transaction sessions..?

Sounds like exactly what I'd want, at least.  Don't kill it off unless
it's blocking something or preventing xmin progression...

Indeed, we have specifically implemented a Nagios check which does
exactly this: it looks to see whether any idle-in-transaction process is
blocking something else, and if it has been idle for too long it gets
killed.  We don't have prepared transactions enabled, so we haven't had
to address that.  We do have a check which alerts on (but doesn't kill,
yet) idle-in-transaction processes which have been idle for a long time.

Thanks,

Stephen




Re: [HACKERS] Extended Prefetching using Asynchronous IO - proposal and patch

2014-06-24 Thread Heikki Linnakangas

On 06/24/2014 06:08 PM, John Lumby wrote:

The question is, if you receive the notification of the I/O completion
using a signal or a thread, is it safe to release the lwlock from the
signal handler or a separate thread?


In the forthcoming new version of the patch that uses sigevent,
the originator takes the LWLock associated with that BAaiocb exclusively,
and, when signalled, the signal handler places that LWLock
on a process-local queue of LWLocks awaiting release.
(No, it cannot be safely released inside the signal handler or in a
separate thread.) Whenever the mainline passes a CHECK_FOR_INTERRUPTS()
macro, and at a few additional points in bufmgr, the backend walks this
process-local queue and releases those LWLocks. This is also done if the
originator itself issues a ReadBuffer, which is the most frequent case in
which the lock is released.

Meanwhile,  any other backend will simply acquire Shared and release.


Ok, doing the work in CHECK_FOR_INTERRUPTS sounds safe. But is that fast 
enough? We have never made any hard guarantees on how often 
CHECK_FOR_INTERRUPTS() is called. In particular, if you're running 3rd 
party C code or PL code, there might be no CHECK_FOR_INTERRUPTS() calls 
for many seconds, or even more. That's a long time to hold onto a buffer 
I/O lock. I don't think that's acceptable :-(.



I think you are right that the existing io_in_progress_lock LWlock in the
buf_header could be used for this, because if there is an aio in progress,
then that lock cannot be in use for synchronous IO. I chose not to use it
because I preferred to keep the wait/post for asynch io separate, but they
could both use the same LWlock. However, the way the LWlock is acquired
and released would still be a bit different because of the need to have
the originator release it in its mainline.


It would be nice to use the same LWLock.

However, if releasing a regular LWLock in a signal handler is not safe, 
and cannot be made safe, perhaps we should, after all, invent a whole 
new mechanism. One that would make it safe to release the lock in a 
signal handler.



By the way, on the "will it actually work though?" question which several folks
have raised, I should mention that this patch has been in semi-production
use for almost 2 years now in different stages of completion on all postgresql
releases from 9.1.4 to 9.5 devel. I would guess it has had around
500 hours of operation by now. I'm sure there are bugs still to be
found but I am confident it is fundamentally sound.


Well, a committable version of this patch is going to look quite
different from the first version that you posted, so I don't put much
weight on how long you've tested the first version.


Yes, I am quite willing to change it, time permitting.
I take the words "committable version" as a positive sign ...


BTW, sorry if I sound negative, I'm actually quite excited about this
feature. A patch like this takes a lot of work, and usually several
rewrites, until it's ready ;-). But I'm looking forward to it.


- Heikki





Re: [HACKERS] Extended Prefetching using Asynchronous IO - proposal and patch

2014-06-24 Thread John Lumby
Thanks Heikki,


> Date: Tue, 24 Jun 2014 17:02:38 +0300
> From: hlinnakan...@vmware.com
> To: johnlu...@hotmail.com; st...@mit.edu
> CC: klaussfre...@gmail.com; pgsql-hackers@postgresql.org
> Subject: Re: Extended Prefetching using Asynchronous IO - proposal and patch
>
> On 06/24/2014 04:29 PM, John Lumby wrote:
>>> On Mon, Jun 23, 2014 at 2:43 PM, John Lumby  wrote:
 It is when some *other* backend gets there first with the ReadBuffer that
 things are a bit trickier. The current version of the patch did polling 
 for that case
 but that drew criticism, and so an imminent new version of the patch
 uses the sigevent mechanism. And there are other ways still.
>>>
>>> I'm a bit puzzled by this though. Postgres *already* has code for this
>>> case. When you call ReadBuffer you set the bits on the buffer
>>
>> Good question. Let me explain.
>> Yes, postgresql has code for the case where a backend is inside a synchronous
>> read() or write(), performed from a ReadBuffer(), and some other backend
>> wants that buffer. Asynchronous aio is initiated not from ReadBuffer
>> but from PrefetchBuffer, and performs its aio_read into an allocated, pinned,
>> postgresql buffer. This is entirely different from the synchronous io case.
>> Why? Because the issuer of the aio_read (the "originator") is unaware
>> of this buffer pinned on its behalf, and is then free to do any other
>> reading or writing it wishes, such as more prefetching or any other 
>> operation.
>> And furthermore, it may *never* issue a ReadBuffer for the block which it
>> prefetched.
>
> I still don't see the difference. Once an asynchronous read is initiated
> on the buffer, it can't be used for anything else until the read has
> finished. This is exactly the same situation as with a synchronous read:
> after read() is called, the buffer can't be used for anything else until
> the call finishes.

Ah, now I see what you and Greg are asking.  See my inline reply below.

>
> In particular, consider the situation from another backend's point of
> view. Looking from another backend (i.e. one that didn't initiate the
> read), there's no difference between a synchronous and asynchronous
> read. So why do we need a different IPC mechanism for the synchronous
> and asynchronous cases? We don't.
>
> I understand that *within the backend*, you need to somehow track the
> I/O, and you'll need to treat synchronous and asynchronous I/Os
> differently. But that's all within the same backend, and doesn't need to
> involve the flags or locks in shared memory at all. The inter-process
> communication doesn't need any changes.
>
>>> The problem with using the Buffers I/O in progress bit is that the I/O
>>> might complete while the other backend is busy doing stuff. As long as
>>> you can handle the I/O completion promptly -- either in callback or
>>> thread or signal handler then that wouldn't matter. But I'm not clear
>>> that any of those will work reliably.
>>
>> They both work reliably, but the criticism was that backend B polling
>> an aiocb of an aio issued by backend A is not documented as
>> being supported (although it happens to work), hence the proposed
>> change to use sigevent.
>
> You didn't understand what Greg meant. You need to handle the completion
> of the I/O in the same process that initiated it, by clearing the
> in-progress bit of the buffer and releasing the I/O in-progress lwlock
> on it. And you need to do that very quickly after the I/O has finished,
> because there might be another backend waiting for the buffer and you
> don't want him to wait longer than necessary.

I think I understand the question now.  I didn't spell out the details
earlier, so let me explain a little more.
With this patch, when a read is issued, it is either a synchronous IO
(as before), or an asynchronous aio_read (new, represented by
both BM_IO_IN_PROGRESS *and* BM_AIO_IN_PROGRESS).
The way other backends wait on a synchronous IO in progress is unchanged.
But if BM_AIO_IN_PROGRESS is set, then *any* backend which requests
ReadBuffer on this block (including the originator) follows a new path
through BufCheckAsync() which, depending on various flags and context,
sends the backend down to FileCompleteaio to check and maybe wait.
So *all* backends waiting for a BM_AIO_IN_PROGRESS buffer wait in that
way.
  
>
> The question is, if you receive the notification of the I/O completion
> using a signal or a thread, is it safe to release the lwlock from the
> signal handler or a separate thread?

In the forthcoming new version of the patch that uses sigevent,
the originator takes the LWLock associated with that BAaiocb exclusively,
and, when signalled, the signal handler places that LWLock
on a process-local queue of LWLocks awaiting release.
(No, it cannot be safely released inside the signal handler or in a
separate thread.) Whenever the mainline passes a CHECK_FOR_INTERRUPTS() macro
and at a

Re: [HACKERS] Add a filed to PageHeaderData

2014-06-24 Thread Pavan Deolasee
On Tue, Jun 24, 2014 at 3:40 PM, Kevin Grittner  wrote:
>
> Soroosh Sardari  wrote:
>
> > I check this problem with a virgin source code of
> > postgresql-9.3.2. So the bug is not for my codes.
>
> > By the way, following code has two different output and it is
> > weird.
>
> I can confirm that I see the difference in 9.3.2, and that I don't
> see the difference in 9.3.4.  Upgrade.
>
> http://www.postgresql.org/support/versioning/
>
> There's really no point in reporting a possible bug on a version
> with known bugs which have already had fixes published.
>

FWIW I can reproduce this on HEAD with the attached patch, on 64-bit
Ubuntu as well as on 64-bit Mac OS X.  It is very confusing: I tried
various values for N in the char[N] array, and the test fails for N=20
but passes for the other values I tried (4, 12, 22 and 24).  The idea
behind trying different values was to see whether pd_linp[] starting on
an unaligned boundary could trigger the issue, but there seems to be no
such correlation.

postgres=# select version();

PostgreSQL 9.5devel on x86_64-apple-darwin13.2.0, compiled by Apple LLVM
version 5.1 (clang-503.0.38) (based on LLVM 3.4svn), 64-bit

postgres=# -- test SP-GiST index that's been built incrementally

postgres=# create table test_range_spgist(ir int4range);
postgres=# create index test_range_spgist_idx on test_range_spgist using
spgist (ir);
postgres=# insert into test_range_spgist select int4range(g, g+10) from
generate_series(1,586) g;
INSERT 0 586

postgres=# SET enable_seqscan = t;
postgres=# SET enable_indexscan = f;
postgres=# SET enable_bitmapscan = f;

postgres=# select * from test_range_spgist where ir -|- int4range(100,500);
    ir
-----------
 [90,100)
 [500,510)
(2 rows)

postgres=# SET enable_seqscan = f;
postgres=# select * from test_range_spgist where ir -|- int4range(100,500);
ir
---
 [90,100)
 [500,510)
(2 rows)

At this point, both rows are visible via index scan as well as seq scan.

postgres=# insert into test_range_spgist select int4range(g, g+10) from
generate_series(587,587) g;
INSERT 0 1

postgres=# select * from test_range_spgist where ir -|- int4range(100,500);
    ir
----------
 [90,100)
(1 row)

Ouch. The second row somehow disappeared.

postgres=# SET enable_seqscan = t;
postgres=# select * from test_range_spgist where ir -|- int4range(100,500);
ir
---
 [90,100)
 [500,510)
(2 rows)

So the last INSERT suddenly makes one row disappear from the index scan,
though it's still reachable via seq scan.  I tried looking at the SP-GiST
code, but clearly I don't understand it well enough to figure out the
issue, if one exists.

Thanks,
Pavan

--
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee


page-header-padding.patch
Description: Binary data



Re: [HACKERS] wrapping in extended mode doesn't work well with default pager

2014-06-24 Thread Sergey Muraviov
Hi.

Is there any problem with the patch?


2014-06-17 0:21 GMT+04:00 Greg Stark :

> On Mon, Jun 16, 2014 at 9:05 PM, Robert Haas 
> wrote:
> > So, it seems like we need to do something about this one way or
> > another.  Who's working on that?
>
> So I'm fine finishing what I started. I've just been a bit busy this past
> week.
>
> My inclination is to try to push forward and commit this patch,
> document the changes and make sure we check for any consequences of
> them.
>
> The alternate plan is to revert it for 9.4 and commit the changes to
> 9.5 and that gives us more time to be sure we're ok with them.
>
>
> --
> greg
>



-- 
Best regards,
Sergey Muraviov


Re: [HACKERS] pgaudit - an auditing extension for PostgreSQL

2014-06-24 Thread Stephen Frost
* Fujii Masao (masao.fu...@gmail.com) wrote:
> I'm not sure if this is a good idea, because it basically means that the
> master and every standby must have the same audit settings, and a user
> cannot control on the standby side what the standby logs. I guess the
> audit settings on a standby would generally be similar to those on the
> master, but I'm not sure that's always true.

The main difference would be that the standby wouldn't be logging
anything about data changing..  but that's to be expected.

Certainly when auditing of select queries and similar actions are done
to satisfy government or industry compliance requirements, it's about
"who reads the data", regardless of where that data is..

Thanks,

Stephen




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Kevin Grittner
Tom Lane  wrote:
> Josh Berkus  writes:

>>> Which makes me wonder whether we shouldn't default this to
>>> something non-zero -- even if it is 5 or 10 days.
>
>> I'd go for even shorter: 48 hours.  I'd suggest 24 hours, but that
>> would trip up some users who just need really long pg_dumps.
>
> FWIW, I do not think we should have a nonzero default for this.
> We could not safely set it to any value that would be small enough
> to be really useful in the field.

I have seen production environments where users asked for help when
performance had gradually degraded to a fraction of what it was,
due to a connection sitting "idle in transaction" for several
weeks.  Even a timeout of five or ten days would have saved a lot
of pain.  What concerns me on the other side is that I've been
known to start a long-running conversion or data fix on a Friday
and check the results on Monday before committing.  Something like
that might sit for a day or two with little or no concurrent
activity to cause a problem.  It would be a real forehead-slapper
to have forgotten to set a longer timeout before starting the run
on Friday.  A five day timeout seems likely to prevent extreme pain
in the former circumstances while not being likely to mess up ad
hoc bulk activity like the latter.

Of course, if I were managing a cluster and was knowingly and
consciously setting a value, it would probably be more like 5min. 
If I have actually set such a policy I am much less likely to
forget it when it needs to be extended or disabled, and far less
likely to be mad at anyone else if it cancels my work.

> BTW, has anyone thought about the interaction of this feature with
> prepared transactions?  I wonder whether there shouldn't be a similar but
> separately-settable maximum time for a transaction to stay in the prepared
> state.  If we could set a nonzero default on that, perhaps on the order of
> a few minutes, we could solve the ancient bugaboo that "prepared
> transactions are too dangerous to enable by default".

I thought about it enough to mention it briefly.  I haven't taken
it further than to note that it would be a great follow-up patch
once this is in.  I'm not sure that a few minutes would be
sufficient, though.  Theoretically, a crash of the transaction
manager, or one of the other data stores managed by it, or even a
WAN connection to one of the servers, should cause the transaction
manager to finish things up after recovery from the problem.  I
think that a default would need to allow sufficient time for that,
so we can have some confidence that the transaction manager has
actually lost track of it.  If I were configuring this for a real
production environment, I would be in mind of frequently having
seen WAN outages of several hours, and a few which lasted two or
three days.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Tom Lane
Andres Freund  writes:
> On 2014-06-24 10:17:49 -0700, Tom Lane wrote:
>> BTW, has anyone thought about the interaction of this feature with
>> prepared transactions?  I wonder whether there shouldn't be a similar but
>> separately-settable maximum time for a transaction to stay in the prepared
>> state.  If we could set a nonzero default on that, perhaps on the order of
>> a few minutes, we could solve the ancient bugaboo that "prepared
>> transactions are too dangerous to enable by default".

> I'd very much like that feature, but I'm not sure how to implement
> it. Which process would do that check? We currently only allow rollbacks
> from the corresponding database...
> The best idea I have is to do it via autovacuum.

I did not actually have any plan in mind when I wrote that, but your
mention of autovacuum suggests an idea for it: consider the code that
kicks autovacuum off a table when somebody wants exclusive lock.
In the same way, we could teach processes that want a lock that conflicts
with a prepared xact that they can kill the prepared xact if it's more
than X seconds old.

The other way in which old prepared xacts are dangerous is in blocking
cleanup of dead tuples, and I agree with your thought that maybe
autovacuum is the place to deal with that.  I don't know whether we'd
really need both mechanisms, or if just one would be enough.

In either case, this wouldn't directly be a timeout but rather a "license
to kill" once a prepared xact exceeds the threshold and is getting in
somebody's way.

regards, tom lane




Re: [HACKERS] PostgreSQL in Windows console and Ctrl-C

2014-06-24 Thread Noah Misch
On Tue, Jun 24, 2014 at 09:24:43AM +, Christian Ullrich wrote:
> pg_ctl does not pass the option anywhere but on Windows, and postmaster.c 
> does not recognize it anywhere else. If it is encountered on a platform where 
> it does not make sense, it will be treated like any other (unknown) long 
> option.
> 
> This is actually the weakest point of the existing patch, in my opinion. 
> Jamming the long option handling into postmaster.c by way of #ifdef WIN32 
> feels wrong, but I could not figure out a better way to do it.

I liked the proposal here; was there a problem with it?
http://www.postgresql.org/message-id/ca+tgmoz3ake4enctmqmzsykc_0pjl_u4c_x47ge48uy1upb...@mail.gmail.com

The pg_upgrade test suite and the $(prove_check)-based test suites rely on
their pg_ctl-started postmasters receiving any console ^C.  pg_ctl deserves a
--foreground or --no-background option for callers that prefer the current
behavior.  That, or those tests need a new way to launch the postmaster.

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: [HACKERS] pgaudit - an auditing extension for PostgreSQL

2014-06-24 Thread Fujii Masao
On Mon, Jun 23, 2014 at 9:50 PM, Stephen Frost  wrote:
> * Fujii Masao (masao.fu...@gmail.com) wrote:
>> On Mon, Jun 23, 2014 at 7:51 PM, Abhijit Menon-Sen  
>> wrote:
>> > At 2014-06-23 19:15:39 +0900, masao.fu...@gmail.com wrote:
>> >> You added this into CF, but its patch has not been posted yet. Are you
>> >> planning to make a patch?
>> >
>> > It's a self-contained contrib module. I thought Ian had posted a
>> > tarball, but it looks like he forgot to attach it (or decided to
>> > provide only a Github link). I've attached a tarball here for
>> > your reference.
>
> I'm not a huge fan of adding this as a contrib module unless we can be
> quite sure that there's a path forward from here to a rework of the
> logging in core which would actually support the features pg_audit is
> adding, without a lot of pain and upgrade issues.  Those issues have
> kept other contrib modules from being added to core.
>
> Splitting up contrib into other pieces, one of which is a 'features'
> area, might address that but we'd really need a way to have those pieces
> be able to include/add catalog tables, at least..
>
>> >> If not, it might be better to implement audit feature in core from the
>> >> beginning.
>> >
>> > Sure, we're open to that possibility. Do you have any ideas about what
>> > an in-core implementation should do/look like?
>>
>> I don't have good idea about that. But maybe we can merge pgaudit.log
>> into log_statement for more flexible settings of what to log.
>
> I'd expect a catalog table or perhaps changes to pg_class (maybe other
> things also..) to define what gets logged..

I'm not sure if this is a good idea, because it basically means that the
master and every standby must have the same audit settings, and a user
cannot control on the standby side what the standby logs. I guess the
audit settings on a standby would generally be similar to those on the
master, but I'm not sure that's always true.

Regards,

-- 
Fujii Masao




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Andres Freund
On 2014-06-24 10:17:49 -0700, Tom Lane wrote:
> BTW, has anyone thought about the interaction of this feature with
> prepared transactions?  I wonder whether there shouldn't be a similar but
> separately-settable maximum time for a transaction to stay in the prepared
> state.  If we could set a nonzero default on that, perhaps on the order of
> a few minutes, we could solve the ancient bugaboo that "prepared
> transactions are too dangerous to enable by default".

I'd very much like that feature, but I'm not sure how to implement
it. Which process would do that check? We currently only allow rollbacks
from the corresponding database...
The best idea I have is to do it via autovacuum.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Atomics hardware support table & supported architectures

2014-06-24 Thread Andres Freund
On 2014-06-24 10:22:08 -0700, Tom Lane wrote:
> Andres Freund  writes:
> > On 2014-06-24 13:03:37 -0400, Noah Misch wrote:
> >> If a change has the potential to make some architectures give wrong
> >> answers only at odd times, that's a different kind of problem.  For
> >> that reason, actively breaking Alpha is a good thing.
> 
> > Not sure what you mean with the 'actively breaking Alpha' statement?
> > That we should drop Alpha?
> 
> +1.  Especially with no buildfarm critter.  Would anyone here care
> to bet even the price of a burger that Alpha isn't broken already?

I'd actually be willing to bet a fair amount of money that it already is
broken. Especially in combination with an aggressively optimizing
compiler.

Then let's do that.

> Even if we *had* an Alpha in the buildfarm, I'd have pretty small
> confidence in whether our code really worked on it.  The buildfarm
> tests just don't stress heavily-concurrent behavior enough.

Yea.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Atomics hardware support table & supported architectures

2014-06-24 Thread Noah Misch
On Tue, Jun 24, 2014 at 07:09:08PM +0200, Andres Freund wrote:
> On 2014-06-24 13:03:37 -0400, Noah Misch wrote:
> > What I'm hearing is that you see two options, (1) personally authoring
> > e.g. sparcv8 code or (2) purging the source tree of sparcv8 code before
> > submitting the patch that would otherwise change it.  I favor middle ground
> > that lets minor platforms pay their own way.  Write your changes with as
> > little effort as you wish toward whether they run on sparcv8.  If they break
> > sparcv8, then either (a) that was okay, or (b) a user will show up with a
> > report and/or patch, and we'll deal with that.
> 
> Sounds sensible to me. But we should document such platforms as not
> being officially supported in that case.

It is usually safe to make the documentation match the facts.

> > If a change has the potential to make some architectures give wrong
> > answers only at odd times, that's a different kind of problem.  For
> > that reason, actively breaking Alpha is a good thing.
> 
> Not sure what you mean with the 'actively breaking Alpha' statement?
> That we should drop Alpha?

Yes:
http://www.postgresql.org/message-id/ca+tgmozhgv_gowyfvcryetihpwnttk1dyea-o3f5+pue3tw...@mail.gmail.com

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Vik Fearing
On 06/24/2014 07:17 PM, Tom Lane wrote:
> BTW, has anyone thought about the interaction of this feature with
> prepared transactions?  I wonder whether there shouldn't be a similar but
> separately-settable maximum time for a transaction to stay in the prepared
> state.  If we could set a nonzero default on that, perhaps on the order of
> a few minutes, we could solve the ancient bugaboo that "prepared
> transactions are too dangerous to enable by default".

I did not think about that, but I could probably cook up a patch for it.
 I don't believe it belongs in this patch, though.
-- 
Vik




Re: [HACKERS] Atomics hardware support table & supported architectures

2014-06-24 Thread Tom Lane
Andres Freund  writes:
> On 2014-06-24 13:03:37 -0400, Noah Misch wrote:
>> If a change has the potential to make some architectures give wrong
>> answers only at odd times, that's a different kind of problem.  For
>> that reason, actively breaking Alpha is a good thing.

> Not sure what you mean with the 'actively breaking Alpha' statement?
> That we should drop Alpha?

+1.  Especially with no buildfarm critter.  Would anyone here care
to bet even the price of a burger that Alpha isn't broken already?

Even if we *had* an Alpha in the buildfarm, I'd have pretty small
confidence in whether our code really worked on it.  The buildfarm
tests just don't stress heavily-concurrent behavior enough.

regards, tom lane




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Tom Lane
Josh Berkus  writes:
> On 06/23/2014 03:52 PM, Andres Freund wrote:
>> True.  Which makes me wonder whether we shouldn't default this to
>> something non-zero -- even if it is 5 or 10 days.

> I'd go for even shorter: 48 hours.  I'd suggest 24 hours, but that would
> trip up some users who just need really long pg_dumps.

FWIW, I do not think we should have a nonzero default for this.
We could not safely set it to any value that would be small enough
to be really useful in the field.

BTW, has anyone thought about the interaction of this feature with
prepared transactions?  I wonder whether there shouldn't be a similar but
separately-settable maximum time for a transaction to stay in the prepared
state.  If we could set a nonzero default on that, perhaps on the order of
a few minutes, we could solve the ancient bugaboo that "prepared
transactions are too dangerous to enable by default".

regards, tom lane




Re: [HACKERS] Atomics hardware support table & supported architectures

2014-06-24 Thread Andres Freund
On 2014-06-24 13:03:37 -0400, Noah Misch wrote:
> On Mon, Jun 23, 2014 at 05:16:15PM +0200, Andres Freund wrote:
> > On 2014-06-23 10:29:54 -0400, Robert Haas wrote:
> > > Telling people that
> > > they can't have even the most minimal platform support code in
> > > PostgreSQL unless they're willing to contribute and maintain a BF VM
> > > indefinitely is not very friendly.  Of course, the risk of their
> > > platform getting broken is higher if they don't, but that's different
> > > than making it a hard requirement.
> > 
> > I agree that we shouldn't actively try to break stuff. But having to
> > understand & blindly modify unused code is on the other hand of actively
> > breaking platforms. It's actively hindering development.
> 
> What I'm hearing is that you see two options, (1) personally authoring
> e.g. sparcv8 code or (2) purging the source tree of sparcv8 code before
> submitting the patch that would otherwise change it.  I favor middle ground
> that lets minor platforms pay their own way.  Write your changes with as
> little effort as you wish toward whether they run on sparcv8.  If they break
> sparcv8, then either (a) that was okay, or (b) a user will show up with a
> report and/or patch, and we'll deal with that.

Sounds sensible to me. But we should document such platforms as not
being officially supported in that case.

> If a change has the potential to make some architectures give wrong
> answers only at odd times, that's a different kind of problem.  For
> that reason, actively breaking Alpha is a good thing.

Not sure what you mean with the 'actively breaking Alpha' statement?
That we should drop Alpha?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Tom Lane
Josh Berkus  writes:
> On 06/24/2014 07:50 AM, Vik Fearing wrote:
>> Once the remote times out, the local transaction is doomed (and won't
>> even know it until it tries to commit).  If we don't allow the fdw to be
>> special, then the local transaction can't run at all.  Ever.

> I'm unclear on how the FDW could be special.  From the point of the
> remote server, how does it even know that it's receiving an FDW
> connection and not some other kind of connection?

One way you could do it is to use a user id that's only for FDW
connections, and do an ALTER ROLE on that id to set the appropriate
timeout.
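
A minimal sketch of that approach, assuming the per-role GUC machinery applies to the timeout proposed in this thread (the role name is illustrative, not from any patch):

```sql
-- Illustrative only: a role used exclusively for incoming postgres_fdw
-- connections, with its own idle-in-transaction timeout.
CREATE ROLE fdw_user LOGIN;

-- Disable (or raise) the timeout for that role alone; the server-wide
-- setting still applies to every other session.
ALTER ROLE fdw_user SET idle_in_transaction_session_timeout = 0;
```

This keeps the decision with the remote DBA rather than with postgres_fdw itself.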

Personally I'm violently against having postgres_fdw mess with this
setting; for one thing, the proposed coding would prevent DBAs from
controlling the timeout as they see fit, because it would override
any ALTER ROLE or other remote-side setting.  It doesn't satisfy the
POLA either.  postgres_fdw does not for example override
statement_timeout; why should it override this timeout?

regards, tom lane




Re: [HACKERS] Atomics hardware support table & supported architectures

2014-06-24 Thread Noah Misch
On Mon, Jun 23, 2014 at 05:16:15PM +0200, Andres Freund wrote:
> On 2014-06-23 10:29:54 -0400, Robert Haas wrote:
> > Telling people that
> > they can't have even the most minimal platform support code in
> > PostgreSQL unless they're willing to contribute and maintain a BF VM
> > indefinitely is not very friendly.  Of course, the risk of their
> > platform getting broken is higher if they don't, but that's different
> > than making it a hard requirement.
> 
> I agree that we shouldn't actively try to break stuff. But having to
> understand & blindly modify unused code is on the other hand of actively
> breaking platforms. It's actively hindering development.

What I'm hearing is that you see two options, (1) personally authoring
e.g. sparcv8 code or (2) purging the source tree of sparcv8 code before
submitting the patch that would otherwise change it.  I favor middle ground
that lets minor platforms pay their own way.  Write your changes with as
little effort as you wish toward whether they run on sparcv8.  If they break
sparcv8, then either (a) that was okay, or (b) a user will show up with a
report and/or patch, and we'll deal with that.

For any minor-platform user sighing now, the community offers an unbeatable
deal on PostgreSQL committer time.  Provide a currently-passing buildfarm
member, and no PostgreSQL committer will be content until his new code works
there.  How can you pass that up?

(By "break sparcv8", I mean a build failure, test suite failure, or large
performance regression.  If a change has the potential to make some
architectures give wrong answers only at odd times, that's a different kind of
problem.  For that reason, actively breaking Alpha is a good thing.)

> > But I think this is all a bit off-topic for this thread.  Andres has
> > already implemented a fallback for people who haven't got CAS and
> > fetch-and-add on their platform, so whether or not we deprecate some
> > more platforms has no bearing at all on this patch.
> 
> While I indeed have that fallback code, that statement is still not
> entirely true. We still need to add atomics support for lots of
> platforms, otherwise they're just going to be 'less supported' than
> now. Are we fine with that, and will we just accept patches?

+1

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Josh Berkus
On 06/24/2014 07:50 AM, Vik Fearing wrote:
> On 06/24/2014 04:04 PM, Robert Haas wrote:
>>> If the local transaction is actually idle in transaction and the local
>>> server doesn't have a timeout, we're no worse off than before this patch.
>>
>> I think we are.  First, the correct timeout is a matter of
>> remote-server-policy, not local-server-policy.  If the remote server
>> wants to boot people with long-running idle transactions, it's
>> entitled to do that, and postgres_fdw shouldn't assume that it's
>> "special".
> 
> So how would the local transaction ever get its work done?  What option
> does it have to tell the remote server that it isn't actually idling, it
> just doesn't need to use the remote connection for a while?
> 
> Once the remote times out, the local transaction is doomed (and won't
> even know it until it tries to commit).  If we don't allow the fdw to be
> special, then the local transaction can't run at all.  Ever.

I'm unclear on how the FDW could be special.  From the point of the
remote server, how does it even know that it's receiving an FDW
connection and not some other kind of connection?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] [BUGS] BUG #10728: json_to_recordset with nested json objects NULLs columns

2014-06-24 Thread Merlin Moncure
On Tue, Jun 24, 2014 at 9:08 AM, Andrew Dunstan  wrote:
> w.r.t. json arrays, I think you're chasing a chimera, since they are
> heterogenous, unlike SQL arrays.

But, there are many useful cases where the json is known to be well
formed, right?  Or do you mean that the difficulties stem from simply
validating the type?  Basically, I'm wondering if

SELECT to_json(foo_t[])

is ever going to be able to be reversed by:

SELECT array(json[b]_populate_recordset(null::foo_t[]), '...'::json[b])

...where foo_t is some arbitrarily complex nested type. even simpler
(although not necessarily faster) would be:

SELECT from_json(null::foo_t[], '...');

or even

SELECT '...'::foo_t[]::json::foo_t[];

My basic gripe with the json[b] APIs is that there is no convenient
deserialization reverse of to_json. Tom's proposal AIUI, in particular
having internal json arrays force to json, would foreclose the last
two cases from ever being possible.
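To make the asymmetry concrete, here is a sketch (foo_t is a hypothetical composite type; from_json does not exist today, which is exactly the gripe):

```sql
CREATE TYPE foo_t AS (a int, b text);

-- Serialization is easy today:
SELECT to_json(ARRAY[ROW(1,'x')::foo_t, ROW(2,'y')::foo_t]);
-- => [{"a":1,"b":"x"},{"a":2,"b":"y"}]

-- The reverse currently has to be assembled by hand:
SELECT array_agg(r)
FROM json_populate_recordset(null::foo_t,
       '[{"a":1,"b":"x"},{"a":2,"b":"y"}]'::json) AS r;
```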

merlin




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Pavel Stehule
2014-06-24 18:43 GMT+02:00 Josh Berkus :

> On 06/23/2014 03:52 PM, Andres Freund wrote:
> > On 2014-06-23 13:19:47 -0700, Kevin Grittner wrote:
> >>> which already seems less clear (because the transaction belongs
> >>> to idle)
> >>
> >> I have no idea what that means.
> >
> > It's "idle_in_transaction"_session_timeout. Not
> > "idle_in"_transaction_session_timeout.
> >
> >>> and for another that distinction seems to be too subtle for users.
> >>
> >> The difference between an "idle in transaction session" and an
> >> "idle transaction" is too subtle for someone preparing to terminate
> >> one of those?
> >
> > Yes. To me that's an academic distinction. As a nonnative speaker it
> > looks pretty much random that one has an "in" in it and the other
> > doesn't. Maybe I'm just having a grammar fail, but there doesn't seem to
> > be much sense in it.
>
> As a native speaker, I find the distinction elusive as well.  If someone
> was actually planning to commit transaction cancel, I'd object to it.
>
> And frankly, it doesn't make any sense to have two independent timeouts
> anyway.  Only one of them would ever be invoked, whichever one came
> first.  If you really want to plan for a feature I doubt anyone is going
> to write, the appropriate two GUCs are:
>
> idle_transaction_timeout: ## ms
> idle_transaction_timeout_action: cancel | terminate
>
> However, since I'm not convinced that anyone is ever going to write the
> "cancel" version, can we please just leave the 2nd GUC out for now?
>
> >>> A long idle in transaction state pretty much always indicates a
> >>> problematic interaction with postgres.
> >>
> >> True.  Which makes me wonder whether we shouldn't default this to
> >> something non-zero -- even if it is 5 or 10 days.
>
> I'd go for even shorter: 48 hours.  I'd suggest 24 hours, but that would
> trip up some users who just need really long pg_dumps.
>

Long transactions should not be a problem -- this should break the
transaction only when it does >>nothing<< for a long time.

Regards

Pavel


>
> --
> Josh Berkus
> PostgreSQL Experts Inc.
> http://pgexperts.com
>
>
>


Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Vik Fearing
On 06/24/2014 06:43 PM, Josh Berkus wrote:
>>>> A long idle in transaction state pretty much always indicates a
>>>> problematic interaction with postgres.
>>>
>>> True.  Which makes me wonder whether we shouldn't default this to
>>> something non-zero -- even if it is 5 or 10 days.
>
> I'd go for even shorter: 48 hours.  I'd suggest 24 hours, but that would
> trip up some users who just need really long pg_dumps.

Why would pg_dump be idle for 24 hours?
-- 
Vik




Re: [HACKERS] Autonomous Transaction (WIP)

2014-06-24 Thread Pavel Stehule
Hello

There are a lot of unnecessary blocks around single statements in the code:

+   if ((inAutoX) && (chunk == events->head) && ((char *)event < afterTriggers->events_stack[my_level].tailfree))
+   {
+   continue;
+   }
+


and there are a few overly long lines.

Regards

Pavel


2014-06-24 18:40 GMT+02:00 Pavel Stehule :

> postgres=# select version();
>
> version
>
> -
>  PostgreSQL 9.5devel on x86_64-unknown-linux-gnu, compiled by gcc (GCC)
> 4.8.2 20131212 (Red Hat 4.8.2-7), 64-bit
> (1 row)
>
>
>
> 2014-06-24 18:39 GMT+02:00 Pavel Stehule :
>
> Hello
>>
>> regress tests fails:
>>
>>  plancache... ok
>>  limit... ok
>>  plpgsql  ... ok
>>  copy2... ok
>>  temp ... FAILED
>>  domain   ... ok
>>  rangefuncs   ... ok
>>  prepare  ... ok
>>  without_oid  ... ok
>>  conversion   ... ok
>>  truncate ... ok
>>  alter_table  ... ok
>>  sequence ... ok
>>
>> I did some small tests and it works well. When I looked to code, I was
>> surprised by hardcoded max nesting level of autonomous transactions
>>
>> #define MAX_AUTOX_NESTING_LEVEL   3
>>
>> why? Is not it too restrictive?
>>
>> I am missing a regress tests.
>>
>> Regards
>>
>> Pavel
>>
>>
>>
>>
>>
>>
>> 2014-06-18 11:19 GMT+02:00 Rajeev rastogi :
>>
>>> On 17 June 2014 02:01, Alvaro Herrera Wrote:
>>>
>>> > What's the status of this patch?
>>>
>>> I have completed work on this and some more changes are done on top of
>>> earlier patch shared:
>>> 1. Fixed all of the issues observed.
>>> 2. Addressed some of the feedback from community like
>>> a. Change the syntax to
>>> START AUTONOMOUS TRANSACTION [READ ONLY | READ
>>> WRITE]
>>> b. As Pavan had pointed, I have made transaction behavior (only
>>> read-only properties) of main and autonomous transaction independent.
>>> 3. Added documentation for this feature.
>>> 4. Rebased to latest git code.
>>>
>>> Please find the attached latest patch and provide opinion.
>>>
>>> Thanks and Regards,
>>> Kumar Rajeev Rastogi
>>>
>>>
>>>
>>>
>>>
>>
>


Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Tom Lane
"Sebastian Reitenbach"  writes:
> OK, that was easy:

> $ cd /usr/ports/databases/postgresql   
> $ make install
> ===>  postgresql-client-9.3.4p0  requires shared libraries .

> OpenBSD VAX is static only, so no postgresql on OpenBSD
> VAX before shared libraries will ever be made working on it.

Ouch.  We long ago passed the point of no return as far as requiring
shared library support: there's too much backend functionality that's
in separate shared libraries rather than being linked directly into
the core executable.  I doubt anyone will be interested in taking on
the task of supporting a parallel all-static build.

I think this means we can write off VAX on NetBSD/OpenBSD as a viable
platform for Postgres :-(.  I'm sad to hear it, but certainly have
not got the cycles personally to prevent it.

regards, tom lane




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Josh Berkus
On 06/23/2014 03:52 PM, Andres Freund wrote:
> On 2014-06-23 13:19:47 -0700, Kevin Grittner wrote:
>>> which already seems less clear (because the transaction belongs
>>> to idle)
>>
>> I have no idea what that means.
> 
> It's "idle_in_transaction"_session_timeout. Not
> "idle_in"_transaction_session_timeout.
> 
>>> and for another that distinction seems to be too subtle for users.
>>
>> The difference between an "idle in transaction session" and an
>> "idle transaction" is too subtle for someone preparing to terminate
>> one of those?
> 
> Yes. To me that's an academic distinction. As a nonnative speaker it
> looks pretty much random that one has an "in" in it and the other
> doesn't. Maybe I'm just having a grammar fail, but there doesn't seem to
> be much sense in it.

As a native speaker, I find the distinction elusive as well.  If someone
was actually planning to commit transaction cancel, I'd object to it.

And frankly, it doesn't make any sense to have two independent timeouts
anyway.  Only one of them would ever be invoked, whichever one came
first.  If you really want to plan for a feature I doubt anyone is going
to write, the appropriate two GUCs are:

idle_transaction_timeout: ## ms
idle_transaction_timeout_action: cancel | terminate

However, since I'm not convinced that anyone is ever going to write the
"cancel" version, can we please just leave the 2nd GUC out for now?
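
Spelled out, the proposed pair would look something like this in postgresql.conf (hypothetical names and values, following the naming suggested above; only the timeout itself is being argued for here):

```
# Hypothetical GUCs:
idle_transaction_timeout = '48h'              # 0 disables the timeout
#idle_transaction_timeout_action = terminate  # terminate | cancel
```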

>>> A long idle in transaction state pretty much always indicates a
>>> problematic interaction with postgres.
>>
>> True.  Which makes me wonder whether we shouldn't default this to
>> something non-zero -- even if it is 5 or 10 days.

I'd go for even shorter: 48 hours.  I'd suggest 24 hours, but that would
trip up some users who just need really long pg_dumps.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Autonomous Transaction (WIP)

2014-06-24 Thread Pavel Stehule
postgres=# select version();

version
-
 PostgreSQL 9.5devel on x86_64-unknown-linux-gnu, compiled by gcc (GCC)
4.8.2 20131212 (Red Hat 4.8.2-7), 64-bit
(1 row)



2014-06-24 18:39 GMT+02:00 Pavel Stehule :

> Hello
>
> regress tests fails:
>
>  plancache... ok
>  limit... ok
>  plpgsql  ... ok
>  copy2... ok
>  temp ... FAILED
>  domain   ... ok
>  rangefuncs   ... ok
>  prepare  ... ok
>  without_oid  ... ok
>  conversion   ... ok
>  truncate ... ok
>  alter_table  ... ok
>  sequence ... ok
>
> I did some small tests and it works well. When I looked to code, I was
> surprised by hardcoded max nesting level of autonomous transactions
>
> #define MAX_AUTOX_NESTING_LEVEL   3
>
> why? Is not it too restrictive?
>
> I am missing a regress tests.
>
> Regards
>
> Pavel
>
>
>
>
>
>
> 2014-06-18 11:19 GMT+02:00 Rajeev rastogi :
>
>> On 17 June 2014 02:01, Alvaro Herrera Wrote:
>>
>> > What's the status of this patch?
>>
>> I have completed work on this and some more changes are done on top of
>> earlier patch shared:
>> 1. Fixed all of the issues observed.
>> 2. Addressed some of the feedback from community like
>> a. Change the syntax to
>> START AUTONOMOUS TRANSACTION [READ ONLY | READ
>> WRITE]
>> b. As Pavan had pointed, I have made transaction behavior (only
>> read-only properties) of main and autonomous transaction independent.
>> 3. Added documentation for this feature.
>> 4. Rebased to latest git code.
>>
>> Please find the attached latest patch and provide opinion.
>>
>> Thanks and Regards,
>> Kumar Rajeev Rastogi
>>
>>
>>
>>
>>
>


temp.out
Description: Binary data



Re: [HACKERS] Autonomous Transaction (WIP)

2014-06-24 Thread Pavel Stehule
Hello

regression tests fail:

 plancache... ok
 limit... ok
 plpgsql  ... ok
 copy2... ok
 temp ... FAILED
 domain   ... ok
 rangefuncs   ... ok
 prepare  ... ok
 without_oid  ... ok
 conversion   ... ok
 truncate ... ok
 alter_table  ... ok
 sequence ... ok

I did some small tests and it works well. When I looked at the code, I was
surprised by the hardcoded max nesting level of autonomous transactions:

#define MAX_AUTOX_NESTING_LEVEL   3

Why? Isn't that too restrictive?

I am missing regression tests.

Regards

Pavel






2014-06-18 11:19 GMT+02:00 Rajeev rastogi :

> On 17 June 2014 02:01, Alvaro Herrera Wrote:
>
> > What's the status of this patch?
>
> I have completed work on this and some more changes are done on top of
> earlier patch shared:
> 1. Fixed all of the issues observed.
> 2. Addressed some of the feedback from community like
> a. Change the syntax to
> START AUTONOMOUS TRANSACTION [READ ONLY | READ
> WRITE]
> b. As Pavan had pointed, I have made transaction behavior (only
> read-only properties) of main and autonomous transaction independent.
> 3. Added documentation for this feature.
> 4. Rebased to latest git code.
>
> Please find the attached latest patch and provide opinion.
>
> Thanks and Regards,
> Kumar Rajeev Rastogi
>
>
>
>
>


regression.diffs
Description: Binary data



Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-24 Thread Stephen Frost
Robert,

I feel like we are getting to the point of simply talking past each
other and so I'll try anew, and I'll include my understanding of how the
different approaches would address the specific use-case you outlined
up-thread.

Single policy
-
The current implementation approach only allows a single policy to be
included.  The concern raised with this approach is that it won't be
very performant due to the qual complexity, which you outlined
(reformatted a bit) as:

WHERE
  sales_rep_id = (SELECT oid FROM pg_roles
  WHERE rolname = current_user
  AND
  oid IN (SELECT id FROM person WHERE is_sales_rep))
  OR
  partner_id = (SELECT p.org_id
FROM pg_roles a, person p
WHERE a.rolname = current_user
  AND a.oid = p.id)

Which I take to mean there is a 'person' table which looks like:

id, is_sales_rep, org_id

and a table which has the RLS qual which looks like:

pk_id, sales_rep_id, partner_id

Then, if the individual is_sales_rep and it's their account by
sales_rep_id, or if the individual's org_id matches the partner_id, they
can see the record.

Using this example with security barrier views and indexes on person.id,
data.pk_id, data.sales_rep_id, and data.partner_id, we'll get a bitmap
heap scan across the 'data' table by having the two OR's run as
InitPlan 1 and InitPlan 2.

Does that address the concern you had around multi-branch OR policies?
This works with more than two OR branches also, though of course we need
appropriate indexes to make use of a Bitmap Heap Scan.

Even with per-user policies, we would define a policy along these lines,
for the "sfrost" role:

WHERE
  sales_rep_id = 16384
  OR partner_id = 1

Which also ends up doing a Bitmap Heap Scan across the data table.

For the case where a sales rep isn't also a partner, you could simplify
this to:

WHERE
  sales_rep_id = 16384

but I'm not sure that really buys you much?  With the bitmap heap
scan, if one side of the OR ends up not returning anything then it
doesn't contribute to the blocks which have to be scanned.  The index
might still need to be scanned, although I think you could avoid even
that with an EXISTS check to see if the user is a partner at all.
That's not to say that a bitmap scan is equivalent to an index scan, but
it's certainly likely to be far better than a sequential scan.

Now, if the query is "select * from data_view with pk_id = 1002;", then
we get an indexed lookup on the data table based on the PK.  That's what
I was trying to point out previously regarding leakproof functions
(which comprise about half of the boolean functions we provide, if I
recall my previous analysis correctly).  We also get indexed lookups
with "pk_id < 10" or similar as those are also leakproof.
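
As a concrete sketch, the single-policy example could be expressed today as a security_barrier view (table and column names are taken from the example qual above; this is illustrative, not the patch's mechanism):

```sql
-- Sketch only: the single-policy qual from above as a security_barrier view.
CREATE VIEW data_view WITH (security_barrier = true) AS
SELECT * FROM data
WHERE sales_rep_id = (SELECT oid FROM pg_roles
                      WHERE rolname = current_user
                        AND oid IN (SELECT id FROM person WHERE is_sales_rep))
   OR partner_id = (SELECT p.org_id
                    FROM pg_roles a, person p
                    WHERE a.rolname = current_user
                      AND a.oid = p.id);

-- With indexes on data.sales_rep_id and data.partner_id, the two OR arms
-- run as InitPlans feeding a bitmap heap scan; a leakproof qual such as
--   SELECT * FROM data_view WHERE pk_id = 1002;
-- can still use the index on data.pk_id.
```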

Multiple, Overlapping policies
--
Per discussion, these would generally be OR'd together.

Building up the overall qual which has to include an OR branch for each
individual policy qual(s) looks like a complicated bit of work and one
which might be better left to the user (and, as just pointed out, the
user may actually want AND instead of OR in some cases..).

Managing the plan cache in a sensible way is certainly made more
complicated by this and might mean that it can't be used at all, which
has already been raised as a show-stopper issue.

In the example which you provided, while we could represent that the two
policies exist (sales representatives vs partners) and that they are to
be OR'd together in the catalog, but I don't immediately see how that
would change the qual which ends up being added to the query in this
case or really improving the overall query plan; at least, not without
eliminating one of the OR branches somehow- which I discuss below.

Multiple, Non-overlapping policies
--
Preventing the overlap of policies ends up being very complicated if
many dimensions are allowed.  For the simple case, perhaps only the
'current role' dimension is useful.  I expect that going down that
route would very quickly lead to requests for other dimensions (client
IP, etc) which is why I'm not a big fan of it, but if that's the
consensus then let's work out the syntax and update the patch and move
on.

Another option might be to have a qual for each policy which
the user can define that indicates if that policy is to be applied or
not and then simply pick the first policy for which that qual which
returns 'true'.  We would require an ordering to be defined in this
case, which I believe was an issue up-thread.  If we allow all policies
matching the quals then we run into the complications mentioned under
"Overlapping policies" above.

If we decide that per-role policies need to be supported, I very
quickly see the need to have "groups" of roles to which a policy is to
be applied.  This would differ from roles today as they would not be
allowed to overlap (otherwise we are into over

Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread David G Johnston
On Tue, Jun 24, 2014 at 11:11 AM, Robert Haas [via PostgreSQL] <
ml-node+s1045698n5808915...@n5.nabble.com> wrote:

> On Tue, Jun 24, 2014 at 10:50 AM, Vik Fearing <[hidden email]
> > wrote:
>
> > On 06/24/2014 04:04 PM, Robert Haas wrote:
> >>> If the local transaction is actually idle in transaction and the local
> >>> server doesn't have a timeout, we're no worse off than before this
> >>> patch.
> >>
> >> I think we are.  First, the correct timeout is a matter of
> >> remote-server-policy, not local-server-policy.  If the remote server
> >> wants to boot people with long-running idle transactions, it's
> >> entitled to do that, and postgres_fdw shouldn't assume that it's
> >> "special".
> >
> > So how would the local transaction ever get its work done?  What option
> > does it have to tell the remote server that it isn't actually idling, it
> > just doesn't need to use the remote connection for a while?
>
> It *is* idling.  You're going to get bloat, and lock contention, and
> so on, just as you would for any other idle session.
>
>
If an application is making use of the foreign server directly then there
is the option to commit after using the foreign server, while saving the
relevant data for the main transaction.  But if you make use of API
functions there can only be a single transaction encompassing both the
local and foreign servers.  But even then, if the user needs a logical
super-transaction across both servers - even though the bulk of the work
occurs locally - that option to commit is then removed regardless of client
usage.

IMO this tool is too blunt to properly allow servers to self-manage
fdw-initiated transactions/sessions; and allowing it to be used is asking
for end-user confusion and frustration.

OTOH, requiring the administrator of the foreign server to issue an ALTER
ROLE fdw_user SET idle_in_transaction_session_timeout = 0; would be fairly
easy to justify.  Allowing them to distinguish between known long-running
and problematic transactions and those that are expected to execute quickly
has value as well.

Ultimately you give the users power and then just need to make sure we
provide sufficient documentation suggestions on how best to configure the
two servers in various typical usage scenarios.

David J.





Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Robert Haas
On Tue, Jun 24, 2014 at 10:50 AM, Vik Fearing  wrote:
> On 06/24/2014 04:04 PM, Robert Haas wrote:
>>> If the local transaction is actually idle in transaction and the local
>>> server doesn't have a timeout, we're no worse off than before this patch.
>>
>> I think we are.  First, the correct timeout is a matter of
>> remote-server-policy, not local-server-policy.  If the remote server
>> wants to boot people with long-running idle transactions, it's
>> entitled to do that, and postgres_fdw shouldn't assume that it's
>> "special".
>
> So how would the local transaction ever get its work done?  What option
> does it have to tell the remote server that it isn't actually idling, it
> just doesn't need to use the remote connection for a while?

It *is* idling.  You're going to get bloat, and lock contention, and
so on, just as you would for any other idle session.

I mean, you could make this assumption about any session: I'm not done
with the transaction yet, e.g. I'm waiting for user input before
deciding what to do next.  That doesn't mean that the DBA doesn't want
to kill it.

> The point of the patch is to allow the DBA to knock off broken clients,
> but this isn't a broken client, it just looks like one.

If it walks like a duck, and quacks like a duck, it's a duck.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-24 Thread Robert Haas
On Tue, Jun 24, 2014 at 10:30 AM, Alvaro Herrera
 wrote:
> Robert Haas wrote:
>> > Right, if we were to support multiple policies on a given table then we
>> > would have to support adding and removing them individually, as well as
>> > specify when they are to be applied- and what if that "when" overlaps?
>> > Do we apply both and only a row which passed them all gets sent to the
>> > user?  Essentially we'd be defining the RLS policies to be AND'd
>> > together, right?  Would we want to support both AND-based and OR-based,
>> > and allow users to pick what set of conditionals they want applied to
>> > their various overlapping RLS policies?
>>
>> AND is not a sensible policy; it would need to be OR.  If you grant
>> someone access to two different subsets of the rows in a table, it
>> stands to reason that they will expect to have access to all of the
>> rows that are in at least one of those subsets.
>
> I haven't been following this thread, but this bit caught my attention.
> I'm not sure I agree that OR is always the right policy either.
> There is a case for a policy that says "forbid these rows to these guys,
> even if they have read permissions from elsewhere".  If OR is the only
> way to mix multiple policies there might not be a way to implement this.
> So ISTM each policy must be able to indicate what to do -- sort of how
> PAM config files allow you to specify "required", "optional" and so
> forth for each module.

Hmm.  Well, that could be useful, but I'm not sure I'd view it as
something we absolutely have to have...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread David Brownlee
Well the latest NetBSD/vax package build doesn't seem to include any
PostgreSQL packages
http://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/vax/6.0_2014Q1/ but I
don't know why.

I'll try a quick (hah :) build this end to see what happens

David



On 24 June 2014 02:12, Robert Haas  wrote:

> On Mon, Jun 23, 2014 at 6:58 PM, Greg Stark  wrote:
> > On Mon, Jun 23, 2014 at 3:09 PM, Robert Haas 
> wrote:
> >> However, we don't know of anyone who has tried to do this in a very
> >> long time, and are therefore considering removing the remaining
> >> support for the VAX platform.  Has anyone tried to build PostgreSQL
> >> for VAX lately?
> >
> > Actually I tried a while ago but got stuck configuring the network on
> > simh so I could get all the tools. I can try again if there's interest
> > but we don't necessarily need to keep a port just because there's a
> > simulator for it.
>
> That's really up to you.  I'm not particularly interested in
> generating interest in maintaining this port if there wouldn't
> otherwise be any; I'm trying to figure out whether there is existing
> interest in it.  For all I know, BSD is shipping PostgreSQL
> binaries for VAX and every other platform they support in each new
> release and people are using them to get real work done.  Then again,
> for all I know, it doesn't even compile on that platform, and if you
> did manage to get it to compile it wouldn't fit on the disk, and if
> you managed to fit it on the disk it wouldn't work because key system
> calls aren't supported.  If someone is still interested in this, I'm
> hoping they'll help us figure out whether it's anywhere close to
> working, and maybe even contribute a buildfarm critter.  If no one
> cares, then let's just rip it out and be done with it.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Sebastian Reitenbach

On Tuesday, June 24, 2014 13:37 CEST, "Sebastian Reitenbach" 
 wrote:

> On Tuesday, June 24, 2014 03:12 CEST, Robert Haas  
> wrote:
>
> > On Mon, Jun 23, 2014 at 6:58 PM, Greg Stark  wrote:
> > > On Mon, Jun 23, 2014 at 3:09 PM, Robert Haas  
> > > wrote:
> > >> However, we don't know of anyone who has tried to do this in a very
> > >> long time, and are therefore considering removing the remaining
> > >> support for the VAX platform.  Has anyone tried to build PostgreSQL
> > >> for VAX lately?
> > >
> > > Actually I tried a while ago but got stuck configuring the network on
> > > simh so I could get all the tools. I can try again if there's interest
> > > but we don't necessarily need to keep a port just because there's a
> > > simulator for it.
> >
> > That's really up to you.  I'm not particularly interested in
> > generating interest in maintaining this port if there wouldn't
> > otherwise be any; I'm trying to figure out whether there is existing
> > interest in it.  For all I know, BSD is shipping PostgreSQL
> > binaries for VAX and every other platform they support in each new
> > release and people are using them to get real work done.  Then again,
> > for all I know, it doesn't even compile on that platform, and if you
> > did manage to get it to compile it wouldn't fit on the disk, and if
> > you managed to fit it on the disk it wouldn't work because key system
> > calls aren't supported.  If someone is still interested in this, I'm
> > hoping they'll help us figure out whether it's anywhere close to
> > working, and maybe even contribute a buildfarm critter.  If no one
> > cares, then let's just rip it out and be done with it.
> >
>
> I'm building the vax packages for openbsd. What I can tell is that
> for 5.5 no postgresql packages were built. But that may be due
> to the recent upgrade from gcc 2.95 to 3.3.
> I guess that not all the dependencies needed to build postgresql
> are available for the vax, or build successfully there. But I need
> to verify. Might take a few days, since I'm currently on vacation,
> with sparse Internet connectivity. ;)

OK, that was easy:

$ cd /usr/ports/databases/postgresql   
$ make install
===>  postgresql-client-9.3.4p0  requires shared libraries .

OpenBSD VAX is static only, so there will be no postgresql on OpenBSD
VAX until shared libraries are made to work on it.

cheers,
Sebastian


>
> Sebastian
>
>
> > --
> > Robert Haas
> > EnterpriseDB: http://www.enterprisedb.com
> > The Enterprise PostgreSQL Company
>









Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Sebastian Reitenbach

On Tuesday, June 24, 2014 03:12 CEST, Robert Haas  wrote:

> On Mon, Jun 23, 2014 at 6:58 PM, Greg Stark  wrote:
> > On Mon, Jun 23, 2014 at 3:09 PM, Robert Haas  wrote:
> >> However, we don't know of anyone who has tried to do this in a very
> >> long time, and are therefore considering removing the remaining
> >> support for the VAX platform.  Has anyone tried to build PostgreSQL
> >> for VAX lately?
> >
> > Actually I tried a while ago but got stuck configuring the network on
> > simh so I could get all the tools. I can try again if there's interest
> > but we don't necessarily need to keep a port just because there's a
> > simulator for it.
>
> That's really up to you.  I'm not particularly interested in
> generating interest in maintaining this port if there wouldn't
> otherwise be any; I'm trying to figure out whether there is existing
> interest in it.  For all I know, BSD is shipping PostgreSQL
> binaries for VAX and every other platform they support in each new
> release and people are using them to get real work done.  Then again,
> for all I know, it doesn't even compile on that platform, and if you
> did manage to get it to compile it wouldn't fit on the disk, and if
> you managed to fit it on the disk it wouldn't work because key system
> calls aren't supported.  If someone is still interested in this, I'm
> hoping they'll help us figure out whether it's anywhere close to
> working, and maybe even contribute a buildfarm critter.  If no one
> cares, then let's just rip it out and be done with it.
>

I'm building the vax packages for openbsd. What I can tell is that
for 5.5 no postgresql packages were built. But that may be due
to the recent upgrade from gcc 2.95 to 3.3.
I guess that not all the dependencies needed to build postgresql
are available for the vax, or build successfully there. But I need
to verify. Might take a few days, since I'm currently on vacation,
with sparse Internet connectivity. ;)

Sebastian


> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>









Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Dave McGuire
On 06/23/2014 06:58 PM, Greg Stark wrote:
> On Mon, Jun 23, 2014 at 3:09 PM, Robert Haas  wrote:
>> However, we don't know of anyone who has tried to do this in a very
>> long time, and are therefore considering removing the remaining
>> support for the VAX platform.  Has anyone tried to build PostgreSQL
>> for VAX lately?
> 
> Actually I tried a while ago but got stuck configuring the network on
> simh so I could get all the tools. I can try again if there's interest
> but we don't necessarily need to keep a port just because there's a
> simulator for it.

  ...not to mention actual hardware.

   -Dave

-- 
Dave McGuire, AK4HZ/3
New Kensington, PA




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Vik Fearing
On 06/24/2014 04:04 PM, Robert Haas wrote:
>> If the local transaction is actually idle in transaction and the local
>> server doesn't have a timeout, we're no worse off than before this patch.
>
> I think we are.  First, the correct timeout is a matter of
> remote-server-policy, not local-server-policy.  If the remote server
> wants to boot people with long-running idle transactions, it's
> entitled to do that, and postgres_fdw shouldn't assume that it's
> "special".

So how would the local transaction ever get its work done?  What option
does it have to tell the remote server that it isn't actually idling, it
just doesn't need to use the remote connection for a while?

Once the remote times out, the local transaction is doomed (and won't
even know it until it tries to commit).  If we don't allow the fdw to be
special, then the local transaction can't run at all.  Ever.

The point of the patch is to allow the DBA to knock off broken clients,
but this isn't a broken client, it just looks like one.
-- 
Vik




Re: [HACKERS] crash with assertions and WAL_DEBUG

2014-06-24 Thread Alvaro Herrera
Heikki Linnakangas wrote:
> On 06/21/2014 01:58 PM, Heikki Linnakangas wrote:
> >It's a bit difficult to attach the mark to the palloc calls, as neither
> >the WAL_DEBUG or LWLOCK_STATS code is calling palloc directly, but
> >marking specific MemoryContexts as sanctioned ought to work. I'll take a
> >stab at that.
> 
> I came up with the attached patch. It adds a function called
> MemoryContextAllowInCriticalSection(), which can be used to exempt
> specific memory contexts from the assertion. The following contexts
> are exempted:

There is a typo in the comment to that function, "This functions can be
used", s/functions/function/

Andres Freund wrote:

> > @@ -1258,6 +1259,25 @@ begin:;
> >     if (XLOG_DEBUG)
> >     {
> >         StringInfoData buf;
> > +       static MemoryContext walDebugCxt = NULL;
> > +       MemoryContext oldCxt;
> > +
> > +       /*
> > +        * Allocations within a critical section are normally not allowed,
> > +        * because allocation failure would lead to a PANIC. But this is just
> > +        * debugging code that no-one is going to enable in production, so we
> > +        * don't care. Use a memory context that's exempt from the rule.
> > +        */
> > +       if (walDebugCxt == NULL)
> > +       {
> > +           walDebugCxt = AllocSetContextCreate(TopMemoryContext,
> > +                                               "WAL Debug",
> > +                                               ALLOCSET_DEFAULT_MINSIZE,
> > +                                               ALLOCSET_DEFAULT_INITSIZE,
> > +                                               ALLOCSET_DEFAULT_MAXSIZE);
> > +           MemoryContextAllowInCriticalSection(walDebugCxt, true);
> > +       }
> > +       oldCxt = MemoryContextSwitchTo(walDebugCxt);
> 
> This will only work though if the first XLogInsert() isn't called from a
> critical section. I'm not sure it's a good idea to rely on that.

Ah, true -- AllocSetContextCreate cannot be called from within a
critical section.

> > diff --git a/src/backend/storage/smgr/md.c b/src/backend/storage/smgr/md.c
> > index 3c1c81a..4264373 100644
> > --- a/src/backend/storage/smgr/md.c
> > +++ b/src/backend/storage/smgr/md.c
> > @@ -219,6 +219,16 @@ mdinit(void)
> >                                       &hash_ctl,
> >                                       HASH_ELEM | HASH_FUNCTION | HASH_CONTEXT);
> >         pendingUnlinks = NIL;
> > +
> > +       /*
> > +        * XXX: The checkpointer needs to add entries to the pending ops
> > +        * table when absorbing fsync requests. That is done within a critical
> > +        * section. It means that there's a theoretical possibility that you
> > +        * run out of memory while absorbing fsync requests, which leads to
> > +        * a PANIC. Fortunately the hash table is small so that's unlikely to
> > +        * happen in practice.
> > +        */
> > +       MemoryContextAllowInCriticalSection(MdCxt, true);
> >     }
> >  }
> 
> Isn't that allowing a bit too much? We e.g. shouldn't allow
> _fdvec_alloc() within a critical section. Might make sense to create a
> child context for it.

I agree.

Rahila Syed wrote:

> The patch on compilation gives following error,
> 
> mcxt.c: In function ‘MemoryContextAllowInCriticalSection’:
> mcxt.c:322: error: ‘struct MemoryContextData’ has no member named
> ‘allowInCriticalSection’
> 
> The member in MemoryContextData is defined as 'allowInCritSection' while
> the MemoryContextAllowInCriticalSection accesses the field as
> 'context->allowInCriticalSection'.

It appears Heikki did a search'n replace for "->allowInCritSection"
before submitting, which failed to match the struct declaration.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread David G Johnston
On Tue, Jun 24, 2014 at 10:05 AM, Robert Haas [via PostgreSQL] <
ml-node+s1045698n580889...@n5.nabble.com> wrote:

> On Tue, Jun 24, 2014 at 9:18 AM, Vik Fearing <[hidden email]
> > wrote:
>
> > On 06/22/2014 05:11 PM, Kevin Grittner wrote:
> >> I found one substantive issue that had been missed in discussion,
> >> though.  The patch modifies the postgres_fdw extension to make it
> >> automatically exempt from an attempt to set a limit like this on
> >> the server to which it connects.  I'm not sure that's a good idea.
> >> Why should this type of connection be allowed to sit indefinitely
> >> with an idle open transaction?  I'm inclined to omit this part of
> >> the patch
> >
> > My reasoning for doing it the way I did is that if a transaction touches
> > a foreign table and then goes bumbling along with other things the
> > transaction is active but the connection to the remote server remains
> > idle in transaction.  If it hits the timeout, when the local transaction
> > goes to commit it errors out and you lose all your work.
> >
> > If the local transaction is actually idle in transaction and the local
> > server doesn't have a timeout, we're no worse off than before this
> patch.
>
> I think we are.  First, the correct timeout is a matter of
> remote-server-policy, not local-server-policy.  If the remote server
> wants to boot people with long-running idle transactions, it's
> entitled to do that, and postgres_fdw shouldn't assume that it's
> "special".  The local server policy may be different, and may not even
> have been configured by the same person.  Second, setting another GUC
> at every session start adds overhead for all users of postgres_fdw.
>
> Now, it might be that postgres_fdw should have a facility to allow
> arbitrary options to be set on the foreign side at each connection
> startup.  Then that could be used here if someone wants this behavior.
> But I don't think we should hard-code it, because it could also be NOT
> what someone wants.
>
>
The missing ability is that while the user only cares about the one logical
session we are dealing with two physical sessions in a parent-child
relationship where the child session state does not match that of its
parent.  For me, this whole line of thought is based upon the logical
"idle_in_transaction" - did the application really mean to leave this
hanging?

Say that 90% of the time disabling the timeout will be the correct course
of action; making the user do this explicitly does not seem reasonable.
And if "doesn't matter" is the current state when the foreign server is
configured, no setting will be passed.  Then if the remote server does
institute a timeout, all the relevant configurations will need to be changed.

ISTM that the additional overhead in this case would be very small in
percentage terms; at least enough so that usability would be my default
choice.

I have no problem allowing for user-specified behavior but the default of
disabling the timeout seems reasonable.  I am doubting that actually
synchronizing the parent and child sessions, so that the child reports the
same status as the parent, is a valid option - though it would be the
"best" solution since the child would only report IIT if the parent was IIT.

For me, a meaningful default and usability are trumping the unknown
performance degradation.  I can go either way on allowing the local
definition to specify its own non-zero timeout but it probably isn't worth
the effort.  The foreign server administrator ultimately will have to be
aware of which users are connecting via FDW and address his "long-running
transaction" concerns in a more nuanced way than this parameter allows.  In
effect this becomes an 80% solution because it is not (all that) useful on
the remote end of a FDW connection; though at least the local end can make
proper use of it to protect both servers.

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/idle-in-transaction-timeout-tp5805859p5808905.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.

Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-24 Thread Alvaro Herrera
Robert Haas wrote:

> > Right, if we were to support multiple policies on a given table then we
> > would have to support adding and removing them individually, as well as
> > specify when they are to be applied- and what if that "when" overlaps?
> > Do we apply both and only a row which passed them all gets sent to the
> > user?  Essentially we'd be defining the RLS policies to be AND'd
> > together, right?  Would we want to support both AND-based and OR-based,
> > and allow users to pick what set of conditionals they want applied to
> > their various overlapping RLS policies?
> 
> AND is not a sensible policy; it would need to be OR.  If you grant
> someone access to two different subsets of the rows in a table, it
> stands to reason that they will expect to have access to all of the
> rows that are in at least one of those subsets.

I haven't been following this thread, but this bit caught my attention.
I'm not sure I agree that OR is always the right policy either.
There is a case for a policy that says "forbid these rows to these guys,
even if they have read permissions from elsewhere".  If OR is the only
way to mix multiple policies there might not be a way to implement this.
So ISTM each policy must be able to indicate what to do -- sort of how
PAM config files allow you to specify "required", "optional" and so
forth for each module.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Andres Freund
On 2014-06-24 10:04:03 -0400, Robert Haas wrote:
> On Tue, Jun 24, 2014 at 9:18 AM, Vik Fearing  wrote:
> > My reasoning for doing it the way I did is that if a transaction touches
> > a foreign table and then goes bumbling along with other things the
> > transaction is active but the connection to the remote server remains
> > idle in transaction.  If it hits the timeout, when the local transaction
> > goes to commit it errors out and you lose all your work.
> >
> > If the local transaction is actually idle in transaction and the local
> > server doesn't have a timeout, we're no worse off than before this patch.
> 
> I think we are.  First, the correct timeout is a matter of
> remote-server-policy, not local-server-policy.  If the remote server
> wants to boot people with long-running idle transactions, it's
> entitled to do that, and postgres_fdw shouldn't assume that it's
> "special".  The local server policy may be different, and may not even
> have been configured by the same person.  Second, setting another GUC
> at every session start adds overhead for all users of postgres_fdw.

+1

> Now, it might be that postgres_fdw should have a facility to allow
> arbitrary options to be set on the foreign side at each connection
> startup.  Then that could be used here if someone wants this behavior.
> But I don't think we should hard-code it, because it could also be NOT
> what someone wants.

I think options=-c idle_in_transaction_timeout=0 in the server config
should already do the trick.
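Concretely, that would look something like the following (a sketch; the server name, host, and database are illustrative, and the GUC name matches the patch under discussion):

```sql
-- Hypothetical foreign server definition: pass the GUC to the remote
-- session at connection startup via libpq's options parameter.
CREATE SERVER remote_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote.example.com', dbname 'appdb',
             options '-c idle_in_transaction_timeout=0');
```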

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] crash with assertions and WAL_DEBUG

2014-06-24 Thread Rahila Syed
Hello,

The patch on compilation gives following error,

mcxt.c: In function ‘MemoryContextAllowInCriticalSection’:
mcxt.c:322: error: ‘struct MemoryContextData’ has no member named
‘allowInCriticalSection’

The member in MemoryContextData is defined as 'allowInCritSection' while
the MemoryContextAllowInCriticalSection accesses the field as
'context->allowInCriticalSection'.


Thank you,





On Mon, Jun 23, 2014 at 3:28 PM, Heikki Linnakangas  wrote:

> On 06/21/2014 01:58 PM, Heikki Linnakangas wrote:
>
>> It's a bit difficult to attach the mark to the palloc calls, as neither
>> the WAL_DEBUG or LWLOCK_STATS code is calling palloc directly, but
>> marking specific MemoryContexts as sanctioned ought to work. I'll take a
>> stab at that.
>>
>
> I came up with the attached patch. It adds a function called
> MemoryContextAllowInCriticalSection(), which can be used to exempt
> specific memory contexts from the assertion. The following contexts are
> exempted:
>
> * ErrorContext
> * MdCxt, which is used in checkpointer to absorb fsync requests. (the
> checkpointer process as a whole is no longer exempt)
> * The temporary StringInfos used in WAL_DEBUG (a new memory "WAL Debug"
> context is now created for them)
> * LWLock stats hash table (a new "LWLock stats" context is created for it)
>
> Barring objections, I'll commit this to master, and remove the assertion
> from REL9_4_STABLE.
>
> - Heikki
>
>
>
>
>


Re: [HACKERS] [BUGS] BUG #10728: json_to_recordset with nested json objects NULLs columns

2014-06-24 Thread Andrew Dunstan


On 06/23/2014 09:43 PM, Tom Lane wrote:

Andrew Dunstan  writes:

On 06/23/2014 07:34 PM, Tom Lane wrote:

I'm not following your comment about 9.3.  The json[b]_to_record[set]
functions are new in 9.4, which is what makes me feel it's not too
late to redefine their behavior.  But changing behavior of stuff that
was in 9.3 seems a lot more debatable.

This problem is also manifest in json_populate_recordset, which also
uses the function in question, and is in 9.3:

Ah, I see the problem.

Here is a first cut suggestion:

* Get rid of the use_json_as_text flag argument for the new functions.
In json_populate_record(set), ignore its value and deprecate using it.
(The fact that it already had a default makes that easier.)  The
behavior should always be as below.

* For nested json objects, we'll spit those out in json textual format,
which means they'll successfully convert to either text or json/jsonb.
Compared to the old behavior of json_populate_recordset, this just means
that we don't throw an error anymore regardless of the flag value,
which seems ok (though maybe not something to backpatch into 9.3).

* Nested json arrays are a bit more problematic.  What I'd ideally like
is to spit them out in a form that would be successfully parsable as a SQL
array of the appropriate element type.  Unfortunately, I think that that
ship has sailed because json_populate_recordset failed to do that in 9.3.
What we should probably do is define this the same as the nested object
case, ie, we spit it out in *json* array format, meaning you can insert it
into a text or json/jsonb field of the result record.  Maybe sometime in
the future we can add a json-array-to-SQL-array converter function, but
these functions won't do that.

From a user's standpoint this just boils down to (a) fix the bug with
mishandling of the hash tables, and (b) get rid of the gratuitous
error report.
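As a hypothetical illustration of the proposed behavior (post-proposal syntax, with the use_json_as_text flag gone): a nested object destined for a json column would simply come through as json text rather than raising an error.

```sql
-- Sketch of the behavior being proposed, not a test against any release:
SELECT * FROM json_to_recordset(
  '[{"a": 1, "b": {"x": true}}]'
) AS t(a int, b json);
-- expected under the proposal: a = 1, b = {"x": true}
```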





The big problem is that we have been ignoring the result type when 
constructing the hash, even though the info is available. There is some 
sense in this in that the field might not even be present in the result 
type. And it works except for structured types like records, arrays and 
json. Even if we don't have a nested value, the functions will do the 
wrong thing for a scalar string destined for a json field (it will be 
de-escaped, when it should not be).


w.r.t. json arrays, I think you're chasing a chimera, since they are 
heterogeneous, unlike SQL arrays.


w.r.t. the use_json_as_text argument, yes, it has a default, but the 
default is false. Ignoring it seems to be more than just deprecating it. 
I agree it's a mess, though :-(



cheers

andrew







Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Robert Haas
On Tue, Jun 24, 2014 at 9:18 AM, Vik Fearing  wrote:
> On 06/22/2014 05:11 PM, Kevin Grittner wrote:
>> I found one substantive issue that had been missed in discussion,
>> though.  The patch modifies the postgres_fdw extension to make it
>> automatically exempt from an attempt to set a limit like this on
>> the server to which it connects.  I'm not sure that's a good idea.
>> Why should this type of connection be allowed to sit indefinitely
>> with an idle open transaction?  I'm inclined to omit this part of
>> the patch
>
> My reasoning for doing it the way I did is that if a transaction touches
> a foreign table and then goes bumbling along with other things the
> transaction is active but the connection to the remote server remains
> idle in transaction.  If it hits the timeout, when the local transaction
> goes to commit it errors out and you lose all your work.
>
> If the local transaction is actually idle in transaction and the local
> server doesn't have a timeout, we're no worse off than before this patch.

I think we are.  First, the correct timeout is a matter of
remote-server-policy, not local-server-policy.  If the remote server
wants to boot people with long-running idle transactions, it's
entitled to do that, and postgres_fdw shouldn't assume that it's
"special".  The local server policy may be different, and may not even
have been configured by the same person.  Second, setting another GUC
at every session start adds overhead for all users of postgres_fdw.

Now, it might be that postgres_fdw should have a facility to allow
arbitrary options to be set on the foreign side at each connection
startup.  Then that could be used here if someone wants this behavior.
But I don't think we should hard-code it, because it could also be NOT
what someone wants.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Extended Prefetching using Asynchronous IO - proposal and patch

2014-06-24 Thread Heikki Linnakangas

On 06/24/2014 04:29 PM, John Lumby wrote:

On Mon, Jun 23, 2014 at 2:43 PM, John Lumby  wrote:

It is when some *other* backend gets there first with the ReadBuffer that
things are a bit trickier. The current version of the patch did polling for
that case, but that drew criticism, and so an imminent new version of the
patch uses the sigevent mechanism. And there are other ways still.


I'm a bit puzzled by this though. Postgres *already* has code for this
case. When you call ReadBuffer you set the bits on the buffer


Good question. Let me explain.
Yes, postgresql has code for the case of a backend being inside a synchronous
read() or write(), performed from a ReadBuffer(), while some other backend
wants that buffer.  Asynchronous aio is initiated not from ReadBuffer but
from PrefetchBuffer, and performs its aio_read into an allocated, pinned,
postgresql buffer.  This is entirely different from the synchronous io case.
Why?  Because the issuer of the aio_read (the "originator") is unaware of
this buffer pinned on its behalf, and is then free to do any other reading
or writing it wishes, such as more prefetching or any other operation.
And furthermore, it may *never* issue a ReadBuffer for the block which it
prefetched.


I still don't see the difference. Once an asynchronous read is initiated 
on the buffer, it can't be used for anything else until the read has 
finished. This is exactly the same situation as with a synchronous read: 
after read() is called, the buffer can't be used for anything else until 
the call finishes.


In particular, consider the situation from another backend's point of 
view. Looking from another backend (i.e. one that didn't initiate the 
read), there's no difference between a synchronous and asynchronous 
read. So why do we need a different IPC mechanism for the synchronous 
and asynchronous cases? We don't.


I understand that *within the backend*, you need to somehow track the 
I/O, and you'll need to treat synchronous and asynchronous I/Os 
differently. But that's all within the same backend, and doesn't need to 
involve the flags or locks in shared memory at all. The inter-process 
communication doesn't need any changes.



>> The problem with using the buffer's I/O-in-progress bit is that the I/O
>> might complete while the other backend is busy doing stuff. As long as
>> you can handle the I/O completion promptly -- either in a callback, thread,
>> or signal handler -- then that wouldn't matter. But I'm not clear
>> that any of those will work reliably.


> They both work reliably, but the criticism was that backend B polling
> an aiocb of an aio issued by backend A is not documented as being
> supported (although it happens to work), hence the proposed change to
> use sigevent.


You didn't understand what Greg meant. You need to handle the completion 
of the I/O in the same process that initiated it, by clearing the 
in-progress bit of the buffer and releasing the I/O in-progress lwlock 
on it. And you need to do that very quickly after the I/O has finished, 
because there might be another backend waiting for the buffer and you 
don't want him to wait longer than necessary.


The question is, if you receive the notification of the I/O completion 
using a signal or a thread, is it safe to release the lwlock from the 
signal handler or a separate thread?



> By the way, on the "will it actually work though?" question which several
> folks have raised, I should mention that this patch has been in
> semi-production use for almost 2 years now in different stages of completion
> on all postgresql releases from 9.1.4 to 9.5 devel. I would guess it has had
> around 500 hours of operation by now. I'm sure there are bugs still to be
> found but I am confident it is fundamentally sound.


Well, a committable version of this patch is going to look quite 
different from the first version that you posted, so I don't put much 
weight on how long you've tested the first version.


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Vik Fearing
On 06/24/2014 03:29 PM, David G Johnston wrote:
> On Tue, Jun 24, 2014 at 9:20 AM, Vik Fearing [via PostgreSQL] <[hidden
> email]> wrote:
> 
> On 06/22/2014 05:11 PM, Kevin Grittner wrote:
> > I found one substantive issue that had been missed in discussion,
> > though.  The patch modifies the postgres_fdw extension to make it
> > automatically exempt from an attempt to set a limit like this on
> > the server to which it connects.  I'm not sure that's a good idea.
> > Why should this type of connection be allowed to sit indefinitely
> > with an idle open transaction?  I'm inclined to omit this part of
> > the patch
> 
> My reasoning for doing it the way I did is that if a transaction
> touches
> a foreign table and then goes bumbling along with other things the
> transaction is active but the connection to the remote server remains
> idle in transaction.  If it hits the timeout, when the local
> transaction
> goes to commit it errors out and you lose all your work.
> 
> If the local transaction is actually idle in transaction and the local
> server doesn't have a timeout, we're no worse off than before this
> patch. 
> 
> 
> Going off of this reading alone wouldn't we have to allow the client to
> set the timeout on the fdw_server - to zero - to ensure reasonable
> operation?

That's what the patch currently does.

> If the client has a process that requires 10 minutes to
> complete, and the foreign server has a default 5 minute timeout, if the
> client does not disable the timeout on the server wouldn't the foreign
> server always cause the process to abort?

Yes.
-- 
Vik




Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Kevin Grittner
David G Johnston  wrote:
> Vik Fearing [via PostgreSQL] <[hidden email]>wrote:
>> On 06/22/2014 05:11 PM, Kevin Grittner wrote:
>>> I found one substantive issue that had been missed in discussion,
>>> though.  The patch modifies the postgres_fdw extension to make it
>>> automatically exempt from an attempt to set a limit like this on
>>> the server to which it connects.  I'm not sure that's a good idea.
>>> Why should this type of connection be allowed to sit indefinitely
>>> with an idle open transaction?  I'm inclined to omit this part of
>>> the patch
>>
>> My reasoning for doing it the way I did is that if a transaction touches
>> a foreign table and then goes bumbling along with other things the
>> transaction is active but the connection to the remote server remains
>> idle in transaction.  If it hits the timeout, when the local transaction
>> goes to commit it errors out and you lose all your work.
>>
>> If the local transaction is actually idle in transaction and the local
>> server doesn't have a timeout, we're no worse off than before this patch. 
>>
>
>
> Going off of this reading alone wouldn't we have to allow the
> client to set the timeout on the fdw_server - to zero - to ensure
> reasonable operation?  If the client has a process that requires
> 10 minutes to complete, and the foreign server has a default 5
> minute timeout, if the client does not disable the timeout on the
> server wouldn't the foreign server always cause the process to
> abort?

That's what Vik did in his patch, and what I was questioning.  I
think he might be right, but I want to think about it.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Extended Prefetching using Asynchronous IO - proposal and patch

2014-06-24 Thread John Lumby



> From: st...@mit.edu
> Date: Mon, 23 Jun 2014 16:04:50 -0700
> Subject: Re: Extended Prefetching using Asynchronous IO - proposal and patch
> To: johnlu...@hotmail.com
> CC: klaussfre...@gmail.com; hlinnakan...@vmware.com; 
> pgsql-hackers@postgresql.org
>
> On Mon, Jun 23, 2014 at 2:43 PM, John Lumby  wrote:
>> It is when some *other* backend gets there first with the ReadBuffer that
>> things are a bit trickier. The current version of the patch polled for
>> that case, but that drew criticism, and so an imminent new version of the
>> patch uses the sigevent mechanism. And there are other ways still.
>
> I'm a bit puzzled by this though. Postgres *already* has code for this
> case. When you call ReadBuffer you set the bits on the buffer

Good question. Let me explain.
Yes, postgresql has code for the case where a backend is inside a synchronous
read() or write(), performed from a ReadBuffer(), and some other backend
wants that buffer. Asynchronous aio is initiated not from ReadBuffer but
from PrefetchBuffer, and performs its aio_read into an allocated, pinned,
postgresql buffer. This is entirely different from the synchronous io case.
Why? Because the issuer of the aio_read (the "originator") is unaware
of this buffer pinned on its behalf, and is then free to do any other
reading or writing it wishes, such as more prefetching or any other
operation. And furthermore, it may *never* issue a ReadBuffer for the
block which it prefetched.

Therefore, asynchronous IO is different from synchronous IO, and a new bit,
BM_AIO_IN_PROGRESS, in the buf_header is required to track this aio
operation until completion.

I would encourage you to read the new postgresql-prefetching-asyncio.README
in the patch file, where this is explained in greater detail.

> indicating I/O is in progress. If another backend does ReadBuffer for
> the same block they'll get the same buffer and then wait until the
> first backend's I/O completes. ReadBuffer goes through some hoops to
> handle this (and all the corner cases such as the other backend's I/O
> completing and the buffer being reused for another block before the
> first backend reawakens). It would be a shame to reinvent the wheel.

No re-invention! Actually some effort has been made to use the
existing functions in bufmgr.c as much as possible rather than
rewriting them.

>
> The problem with using the buffer's I/O-in-progress bit is that the I/O
> might complete while the other backend is busy doing stuff. As long as
> you can handle the I/O completion promptly -- either in a callback, thread,
> or signal handler -- then that wouldn't matter. But I'm not clear
> that any of those will work reliably.

They both work reliably, but the criticism was that backend B polling
an aiocb of an aio issued by backend A is not documented as being
supported (although it happens to work), hence the proposed change to
use sigevent.

By the way, on the "will it actually work though?" question which several
folks have raised, I should mention that this patch has been in
semi-production use for almost 2 years now in different stages of completion
on all postgresql releases from 9.1.4 to 9.5 devel. I would guess it has had
around 500 hours of operation by now. I'm sure there are bugs still to be
found but I am confident it is fundamentally sound.
 
>
> --
> greg
  



Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread David G Johnston
On Tue, Jun 24, 2014 at 9:20 AM, Vik Fearing [via PostgreSQL] <
ml-node+s1045698n5808882...@n5.nabble.com> wrote:

> On 06/22/2014 05:11 PM, Kevin Grittner wrote:
> > I found one substantive issue that had been missed in discussion,
> > though.  The patch modifies the postgres_fdw extension to make it
> > automatically exempt from an attempt to set a limit like this on
> > the server to which it connects.  I'm not sure that's a good idea.
> > Why should this type of connection be allowed to sit indefinitely
> > with an idle open transaction?  I'm inclined to omit this part of
> > the patch
>
> My reasoning for doing it the way I did is that if a transaction touches
> a foreign table and then goes bumbling along with other things the
> transaction is active but the connection to the remote server remains
> idle in transaction.  If it hits the timeout, when the local transaction
> goes to commit it errors out and you lose all your work.
>
> If the local transaction is actually idle in transaction and the local
> server doesn't have a timeout, we're no worse off than before this patch.
>

Going off of this reading alone wouldn't we have to allow the client to
set the timeout on the fdw_server - to zero - to ensure reasonable
operation?  If the client has a process that requires 10 minutes to
complete, and the foreign server has a default 5 minute timeout, if the
client does not disable the timeout on the server wouldn't the foreign
server always cause the process to abort?

David J.





Re: [HACKERS] idle_in_transaction_timeout

2014-06-24 Thread Vik Fearing
On 06/22/2014 05:11 PM, Kevin Grittner wrote:
> I found one substantive issue that had been missed in discussion,
> though.  The patch modifies the postgres_fdw extension to make it
> automatically exempt from an attempt to set a limit like this on
> the server to which it connects.  I'm not sure that's a good idea. 
> Why should this type of connection be allowed to sit indefinitely
> with an idle open transaction?  I'm inclined to omit this part of
> the patch

My reasoning for doing it the way I did is that if a transaction touches
a foreign table and then goes bumbling along with other things the
transaction is active but the connection to the remote server remains
idle in transaction.  If it hits the timeout, when the local transaction
goes to commit it errors out and you lose all your work.

If the local transaction is actually idle in transaction and the local
server doesn't have a timeout, we're no worse off than before this patch.
-- 
Vik




Re: [HACKERS] [BUGS] BUG #10728: json_to_recordset with nested json objects NULLs columns

2014-06-24 Thread Merlin Moncure
On Mon, Jun 23, 2014 at 8:43 PM, Tom Lane  wrote:
> * Nested json arrays are a bit more problematic.  What I'd ideally like
> is to spit them out in a form that would be successfully parsable as a SQL
> array of the appropriate element type.  Unfortunately, I think that that
> ship has sailed because json_populate_recordset failed to do that in 9.3.
> What we should probably do is define this the same as the nested object
> case, ie, we spit it out in *json* array format, meaning you can insert it
> into a text or json/jsonb field of the result record.  Maybe sometime in
> the future we can add a json-array-to-SQL-array converter function, but
> these functions won't do that.

Not quite following your logic here.  9.3 gave an error for an
internally nested array:

postgres=# create type foo as(a int, b int[]);
postgres=# select * from json_populate_recordset(null::foo, '[{"a": 1,
"b": [1,2,3]},{"a": 1, "b": [1,2,3]}]');
ERROR:  cannot call json_populate_recordset on a nested object

With your proposal this would still fail?  TBH, I'd rather this
function fail as above than implement a behavior we couldn't take back
later.

merlin




Re: [HACKERS] pgaudit - an auditing extension for PostgreSQL

2014-06-24 Thread Robert Haas
On Mon, Jun 23, 2014 at 6:51 AM, Abhijit Menon-Sen  wrote:
> There are some unresolved questions with #2 because the extensible
> reloptions patch seems to have lost favour, but I'm pretty sure we
> could figure out some alternative.

I didn't particularly like the proposed *implementation* of extensible
reloptions, but I think the general concept has merit.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-06-24 Thread Robert Haas
On Tue, Jun 24, 2014 at 7:45 AM, Sebastian Reitenbach
 wrote:
>> I'm building the vax packages for openbsd. What I can tell is that
>> for 5.5 no postgresql packages were built. But that may be that
>> due to the recent upgrade from gcc 2.95 to 3.3.
>> I guess that not all dependencies to actually build postgresql
>> are available for the vax, or may build successfully there. But I need
>> to verify. Might need a few days, since I'm currently on vacation,
>> with sparse Internet connectivity. ;)
>
> OK, that was easy:
>
> $ cd /usr/ports/databases/postgresql
> $ make install
> ===>  postgresql-client-9.3.4p0  requires shared libraries .
>
> OpenBSD VAX is static only, so no postgresql on OpenBSD
> VAX before shared libraries will ever be made working on it.

Thanks very much; that's useful information.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] API change advice: Passing plan invalidation info from the rewriter into the planner?

2014-06-24 Thread Robert Haas
On Mon, Jun 23, 2014 at 2:29 PM, Stephen Frost  wrote:
> What are these policies going to depend on?  Will they be allowed to
> overlap?  I don't see multi-policy support as being very easily added.

We discussed the point about overlap upthread, and I gave specific
examples.  If there's something else you want me to provide here,
please be more clear about it.

> If there are specific ways to design the syntax which would make it
> easier to support multiple policies in the future, I'm all for it.  Have
> any specific thoughts regarding that?

I did propose something already upthread, and then Dean said this:

# Note that the syntax proposed elsewhere --- GRANT SELECT (polname) ON
# TABLE tab TO role --- doesn't work because it conflicts with the
# syntax for granting column privileges, so there needs to be a distinct
# syntax for this, and I think it ought to ultimately allow things like
#
# GRANT SELECT (col1, col2), UPDATE (col1) ON t1 TO bob USING policy1;

He's got a good point there.  I don't know whether the policy should
be given inline (e.g. GRANT ... WHERE stuff()) or out-of-line (GRANT
... USING policy1) but it seems like specifying it as some sort of
GRANT modifier might make sense.  I'm sure there are other ways also,
of course.

>> >> - Require the user to specify in some way which of the available
>> >> policies they want applied, and then apply only that one.
>> >
>> > I'd want to at least see a way to apply an ordering to the policies
>> > being applied, or have PG work out which one is "cheapest" and try that
>> > one first.
>>
>> Cost-based comparison of policies that return different results
>> doesn't seem sensible to me.
>
> I keep coming back to the thought that, really, having multiple
> overlapping policies just adds unnecessary complication to the system
> for not much gain in real functionality.  Being able to specify a policy
> per-role might be useful, but that's only one dimension and I can
> imagine a lot of other dimensions that one might want to use to control
> which policy is used.

Well, I don't agree, and I've given examples upthread showing the
kinds of scenarios that I'm concerned about, which are drawn from real
experiences I've had.  It may be that I'm the only one who has had
such experiences, of course; or that there aren't enough people who
have to justify catering to such use cases.  But I'm not sure there's
much point in trying to have a conversation about how such a thing
could be made to work if you're just going to revert back to "well, we
don't really need this anyway" each time I make or refute a technical
point.

>> I think it would be a VERY bad idea to design the system around the
>> assumption that the RLS quals will be much more or less selective than
>> the user-supplied quals.  That's going to be different in different
>> environments.
>
> Fine- but do you really see the query planner having a problem pushing
> down whichever is the more selective qual, if the user-provided qual is
> marked as leakproof?

I'm not quite sure I understand the scenario you're describing here.
Can you provide a tangible example?  I expect that most of the things
the RLS-limited user might write in the WHERE clause will NOT get
pushed down because most functions are not leakproof.  However, the
issue I'm actually concerned about is whether the *security* qual is
simple enough to permit an index-scan.  Anything with an OR clause in
it probably won't be, and any function call definitely won't be.

> I realize that you want multiple policies because you'd like a way for
> the RLS qual to be made simpler for certain cases while also having more
> complex quals for other cases.  What I keep waiting to hear is exactly
> how you want to specify which policy is used because that's where it
> gets ugly and complicated.  I still really don't like the idea of trying
> to apply multiple policies inside of a single query execution.

See above comments.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Add a filed to PageHeaderData

2014-06-24 Thread Soroosh Sardari
On Tue, Jun 24, 2014 at 3:27 PM, Andres Freund 
wrote:

> On 2014-06-24 15:23:54 +0430, Soroosh Sardari wrote:
> > On Tue, Jun 24, 2014 at 2:40 PM, Kevin Grittner 
> wrote:
> >
> > > Soroosh Sardari  wrote:
> > >
> > > > I check this problem with a virgin source code of
> > > > postgresql-9.3.2. So the bug is not for my codes.
> > >
> > > > By the way, following code has two different output and it is
> > > > weird.
> > >
> > > I can confirm that I see the difference in 9.3.2, and that I don't
> > > see the difference in 9.3.4.  Upgrade.
> > >
> > > http://www.postgresql.org/support/versioning/
> > >
> > > There's really no point in reporting a possible bug on a version
> > > with known bugs which have already had fixes published.
> > >
> > > --
> > > Kevin Grittner
> > > EDB: http://www.enterprisedb.com
> > > The Enterprise PostgreSQL Company
> > >
> >
> >
> > Wow, it's arch-dependent.
> > In the 32-bit build of PG 9.3.2 the code has the same output, and in the
> > 64-bit binary of the same code the output is different!!
> >
> > The problem is not about the SQL code I posted in the last email; it
> > could appear on any architecture. On 32-bit or 64-bit alike, adding a
> > char array of length 20 to PageHeaderData causes errors in the
> > regression test.
>
> You likely didn't adapt SizeOfPageHeaderData.
>
> Greetings,
>
> Andres Freund
>
> --
>  Andres Freund http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>



#define SizeOfPageHeaderData (offsetof(PageHeaderData, pd_linp))

I think the macro does not need any change!
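A small self-contained check supports this (using simplified stand-in structs, not the real bufpage.h definitions): because the macro is defined with offsetof on pd_linp, it picks up a field inserted before pd_linp automatically, so the macro text itself needs no edit even though its value changes.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-ins: field names mirror PageHeaderData, but this is
 * not the actual on-disk layout. */
typedef struct
{
    uint64_t pd_lsn;
    uint16_t pd_checksum;
    uint16_t pd_flags;
    uint16_t pd_lower;
    uint16_t pd_upper;
    uint16_t pd_special;
    uint16_t pd_pagesize_version;
    uint32_t pd_prune_xid;
    uint16_t pd_linp[];           /* line pointer array begins here */
} MiniPageHeaderData;

typedef struct
{
    uint64_t pd_lsn;
    uint16_t pd_checksum;
    uint16_t pd_flags;
    uint16_t pd_lower;
    uint16_t pd_upper;
    uint16_t pd_special;
    uint16_t pd_pagesize_version;
    uint32_t pd_prune_xid;
    char     pd_extra[20];        /* the hypothetical 20-byte addition */
    uint16_t pd_linp[];
} MiniPageHeaderWithExtra;

/* offsetof(..., pd_linp) moves when a field is inserted before it, so an
 * offsetof-based SizeOfPageHeaderData needs no manual adjustment. */
#define SizeOfMiniHeader      offsetof(MiniPageHeaderData, pd_linp)
#define SizeOfMiniHeaderExtra offsetof(MiniPageHeaderWithExtra, pd_linp)
```

Of course, the regression failures can still come from other places that assume the header size (or from forgetting to reinitdb), which is presumably what Andres was getting at.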


Re: [HACKERS] Hooks Docu - list of hooks

2014-06-24 Thread geohas



On 24/06/14 12:59, Abhijit Menon-Sen wrote:
> At 2014-06-24 12:49:17 +0200, li...@hasibether.at wrote:
>>
>> Is there a list of possible hooks, or maybe a little docu or overview?
>
> The best I found was "git grep _hook_type" and then read the code to
> understand when and why the hook was called.
>
>> Especially hooks to catch Insert, Update and Delete Stmts and
>> SubQuerys.
>>
>> It would help a lot to finish / write a log into Tables Module.
>
> Look at how pgaudit does it: https://github.com/2ndQuadrant/pgaudit
I already tried pgaudit ;), one of the best examples; it helped me a lot.
>
>
> The code has comments about how the various available hooks are used.
> (I was planning to implement a bgwriter that wrote log messages to a
> table, which sounds a bit like what you want to do.)
The module I'm thinking of and working on is a bit inspired by pgaudit
and petere's pg_trashcan.
It should copy every created table into a "shadow" schema with extra
columns for record on / record off and the user ID (this is already
working ;)). In case of a DROP statement it should rename the table in
the shadow schema to XXX-dropped-Date.

Now I am trying to catch the planned Stmts, ...
It should work without triggers, because the shadow schema should only
be visible to user postgres.

regards
geohas

>
>
> -- Abhijit
>
>






Re: [HACKERS] Hooks Docu - list of hooks

2014-06-24 Thread Abhijit Menon-Sen
At 2014-06-24 12:49:17 +0200, li...@hasibether.at wrote:
>
> Is there a list of possible hooks, or maybe a little docu or overview?

The best I found was "git grep _hook_type" and then read the code to
understand when and why the hook was called.

> Especially hooks to catch Insert, Update and Delete Stmts and
> SubQuerys.
> 
> It would help a lot to finish / write a log into Tables Module.

Look at how pgaudit does it: https://github.com/2ndQuadrant/pgaudit

The code has comments about how the various available hooks are used.
(I was planning to implement a bgwriter that wrote log messages to a
table, which sounds a bit like what you want to do.)

-- Abhijit




Re: [HACKERS] PostgreSQL in Windows console and Ctrl-C

2014-06-24 Thread MauMau

From: "Christian Ullrich"
> On non-Windows platforms, the --background option is not passed, and the
> option handling is unmodified except for an additional pair of braces. The
> postmaster does not pass the option to its children on any platform.
> pg_ctl does not pass the option anywhere but on Windows, and postmaster.c
> does not recognize it anywhere else. If it is encountered on a platform
> where it does not make sense, it will be treated like any other (unknown)
> long option.

OK.

> Restart is implemented as stop/start, so, yes.

Then, please mention restart mode as well, like "start and restart mode",
for clarification.


Regards
MauMau




