Re: [HACKERS] pg_ctl idempotent option

2013-01-15 Thread Dickson S. Guedes
2013/1/14 Tom Lane :
> Alvaro Herrera  writes:
>> Tom Lane wrote:
>>> Peter Eisentraut  writes:
 Here is a patch to add an option -I/--idempotent to pg_ctl, the result
 of which is that pg_ctl doesn't error on start or stop if the server is
 already running or already stopped.
>
>>> Idempotent is a ten-dollar word.  Can we find something that average
>>> people wouldn't need to consult a dictionary to understand?
>
>> --no-error perhaps?
>
> Meh, that's probably going too far in the direction of imprecision.
> The point of this patch is that only very specific errors are
> suppressed.
>
> I don't have a better idea though.  It'd be easier if there were
> separate switches for the two cases, then you could call them
> --ok-if-running and --ok-if-stopped.  But that's not very workable,
> if only because both would want the same single-letter abbreviation.

--ignore-status
--ignore-status-start
--ignore-status-stop

?

Regards
-- 
Dickson S. Guedes
mail/xmpp: gue...@guedesoft.net - skype: guediz
http://guedesoft.net - http://www.postgresql.org.br


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_ctl idempotent option

2013-01-15 Thread Vik Reykja
On Mon, Jan 14, 2013 at 4:22 PM, Tom Lane  wrote:

> Peter Eisentraut  writes:
> > Here is a patch to add an option -I/--idempotent to pg_ctl, the result
> > of which is that pg_ctl doesn't error on start or stop if the server is
> > already running or already stopped.
>
> Idempotent is a ten-dollar word.  Can we find something that average
> people wouldn't need to consult a dictionary to understand?
>

I disagree that we should dumb things down when the word means exactly
what we want and, based on the rest of this thread, is the only word or
phrase that carries the desired meaning.

I vote to keep --idempotent.

Vik


Re: [HACKERS] logical changeset generation v4

2013-01-15 Thread Andres Freund
On 2013-01-15 17:41:50 +1300, Mark Kirkwood wrote:
> On 15/01/13 17:37, Mark Kirkwood wrote:
> >On 15/01/13 14:38, Andres Freund wrote:
> >>Hi everyone,
> >>
> >>Here is the newest version of logical changeset generation.
> >>
> >>
> >
> >
> >
> >I'm quite interested in this feature - so tried applying the 19 patches to
> >the latest 9.3 checkout. Patch and compile are good.

Thanks! Any input welcome.

The git tree might make it easier to follow development ;)

> >However portals seem busted:
> >
> >bench=# BEGIN;
> >BEGIN
> >bench=# DECLARE c1 CURSOR FOR SELECT * FROM pgbench_accounts;
> >DECLARE CURSOR
> >bench=# FETCH 2 FROM c1;
> > aid | bid | abalance | filler
> >-----+-----+----------+--------
> >   1 |   1 |        0 |
> >   2 |   1 |        0 |
> >(2 rows)
> >
> >bench=# DELETE FROM pgbench_accounts WHERE CURRENT OF c1;
> >The connection to the server was lost. Attempting reset: Failed.
> >
> 
> Sorry - forgot to add: assert and debug build, and it is an assertion
> failure that is being picked up:
> 
> TRAP: FailedAssertion("!(htup->t_tableOid != ((Oid) 0))", File: "tqual.c",
> Line: 940)

Unfortunately I don't see the error here; I guess it's related to how
the stack is reused. But I think I found the cause; check the attached
patch, which I also pushed to the git repository.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From 25bd9aeefb03ec39ff1d1cbbac4d2507d533f6d1 Mon Sep 17 00:00:00 2001
From: Andres Freund 
Date: Tue, 15 Jan 2013 11:50:33 +0100
Subject: [PATCH] wal_decoding: mergeme *Satisfies: Setup a correct
 tup->t_tableOid in heap_get_latest_tid

Code review found one other case where tableOid potentially didn't get set, in
nodeBitmapHeapscan. That's fixed as well.

Found independently by Mark Kirkwood and Abhijit Menon-Sen
---
 src/backend/access/heap/heapam.c  | 1 +
 src/backend/executor/nodeBitmapHeapscan.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 1ff58a4..3346c8a 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1789,6 +1789,7 @@ heap_get_latest_tid(Relation relation,
 		tp.t_self = ctid;
 		tp.t_data = (HeapTupleHeader) PageGetItem(page, lp);
 		tp.t_len = ItemIdGetLength(lp);
+		tp.t_tableOid = RelationGetRelid(relation);
 
 		/*
 		 * After following a t_ctid link, we might arrive at an unrelated
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index c83f972..eda1394 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -258,6 +258,7 @@ BitmapHeapNext(BitmapHeapScanState *node)
 
 		scan->rs_ctup.t_data = (HeapTupleHeader) PageGetItem((Page) dp, lp);
 		scan->rs_ctup.t_len = ItemIdGetLength(lp);
+		scan->rs_ctup.t_tableOid = scan->rs_rd->rd_id;
 		ItemPointerSet(&scan->rs_ctup.t_self, tbmres->blockno, targoffset);
 
 		pgstat_count_heap_fetch(scan->rs_rd);
-- 
1.7.12.289.g0ce9864.dirty




Re: [HACKERS] logical changeset generation v4

2013-01-15 Thread Andres Freund
On 2013-01-15 01:00:00 -0300, Alvaro Herrera wrote:
> Andres Freund wrote:
> 
> I've been giving a couple of these parts a look.  In particular
> 
> > [03] Split out xlog reading into its own module called xlogreader
> 
> Cleaned this one up a bit last week.  I will polish it some more,
> publish for some final comments, and commit.

I have some smaller bugfixes in my current version that you probably
don't have yet (on grounds of being fixed this weekend)... So we need to
be a bit careful not to lose those.

> > Second, I don't think the test_logical_replication functions should live
> > in core as they shouldn't be used for a production replication scenario
> > (causes longrunning transactions, requires polling) , but I have failed
> > to find a neat way to include a contrib extension in the plain
> > regression tests.
> 
> I think this would work if you make a "stamp" file in the contrib
> module, similar to how doc/src/sgml uses those.

I tried that; the problem is not the building itself but getting it
installed into the temporary installation...
But anyway, testing decoding requires a different wal_level, so I was
hesitant to continue on grounds of that alone.
Should we just raise it? That's probably problematic for tests around
some WAL optimizations and such?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] Re: [PATCH] Compile without warning with gcc's -Wtype-limits, -Wempty-body

2013-01-15 Thread Andres Freund
On 2013-01-14 22:26:39 -0500, Tom Lane wrote:
> Andres Freund  writes:
> > On 2013-01-14 20:39:05 -0500, Peter Eisentraut wrote:
> >> On Tue, 2013-01-15 at 00:29 +0100, Andres Freund wrote:
> >>> Independently from this patch, should we add -Wtype-limits to the
> >>> default parameters?
> 
> >> I think we have had a discussion along this line before.  I am against
> >> fixing warnings from this option, because those changes would hide
> >> errors if a variable's type changed from signed to unsigned or vice
> >> versa, which could happen because of refactoring or it might be
> >> dependent on system headers.
> 
> > Well, I already found a bug (although with very limited consequences) in
> > the walsender code and one with graver consequences in code I just
> > submitted. So I don't really see that being on-par with some potential
> > future refactoring...
> 
> FWIW, I agree with Peter --- in particular, warning against "x >= MIN"
> just because MIN happens to be zero and x happens to be unsigned is the
> sort of nonsense up with which we should not put.  Kowtowing to that
> kind of warning makes the code less robust, not more so.

Oh, I agree, that warning is pointless in itself.

But in general doing a comparison like > 0 *can* expose a programming
error, as evidenced e.g. by
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3f4b1749a8168893558f70021be4f40c650bbada
and by just about the same error I made in xlogdump.

I just think that the price of fixing a single Assert() that hasn't
changed in years, where the variable isn't ever likely to become signed,
is acceptable.

> It's a shame that the compiler writers have not figured this out and
> separated misguided pedantry from actually-useful warnings.  If I assign
> -1 to an unsigned variable, by all means tell me about *that*.  Don't
> tell me your opinion of whether an assertion check is necessary.

Yeah, but I have to admit it's damned hard to automatically tell the
genuinely valid warning above apart from the bogus Assert one...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] replace plugins directory with GUC

2013-01-15 Thread Peter Eisentraut
On Tue, 2012-10-09 at 20:45 -0400, Peter Eisentraut wrote:
> About that plugins directory ($libdir/plugins) ... I don't think we
> ever
> really got that to work sensibly.  I don't remember the original
> design
> discussion, but I have seen a number of explanations offered over the
> years.  It's not clear who decides what to put in there (plugin
> author,
> packager, DBA?), how to put it there (move it, copy it, symlink it? --
> no support in pgxs), and based on what criteria.
> 
> It would seem to be much more in the spirit of things to simply list
> the
> allowed plugins in a GUC variable, like
> 
> some_clever_name_here = $libdir/this, $libdir/that 

Here is a patch, with some_clever_name = user_loadable_libraries.

There are obviously some conflict/transition issues with using
user_loadable_libraries vs the plugins directory.  I have tried to
explain the mechanisms in the documentation, but there are other choices
possible in some situations.

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index b7df8ce..e276dd6 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5547,16 +5547,25 @@ Other Defaults

 

-Because this is not a superuser-only option, the libraries
-that can be loaded are restricted to those appearing in the
-plugins subdirectory of the installation's
-standard library directory.  (It is the database administrator's
+Because this is not a superuser-only option, the libraries that can be
+loaded are restricted.  Either they have to be listed
+in  or they have to be
+stored in the plugins subdirectory of the installation's
+standard library directory.  It is the database administrator's
 responsibility to ensure that only safe libraries
-are installed there.)  Entries in local_preload_libraries
-can specify this directory explicitly, for example
-$libdir/plugins/mylib, or just specify
-the library name — mylib would have
-the same effect as $libdir/plugins/mylib.
+are enabled in these ways.
+   
+
+   
+Entries in local_preload_libraries
+can specify the plugins directory explicitly, for example
+$libdir/plugins/mylib.  If they don't, but the
+plugins directory exists, then a library name
+without a slash, such as mylib, will be expanded
+to $libdir/plugins/mylib.  If
+the plugins directory does not exist, then all
+such names would need to be listed
+in user_loadable_libraries.

 

@@ -5584,6 +5593,27 @@ Other Defaults
   
  
 
+ 
+  user_loadable_libraries (string)
+  
+   user_loadable_libraries configuration parameter
+  
+  
+   
+This parameter specifies a list of shared libraries that unprivileged
+users can load using either the LOAD command or by
+listing them in .  The
+library names listed here and the library names invoked by one of the
+mentioned methods must match verbatim in order for the loading to be
+allowed.
+   
+
+   
+Using this parameter is an alternative to placing those libraries into
+the plugins directory.
+   
+  
+  
  
 

diff --git a/doc/src/sgml/ref/load.sgml b/doc/src/sgml/ref/load.sgml
index f44f313..fc77bd5 100644
--- a/doc/src/sgml/ref/load.sgml
+++ b/doc/src/sgml/ref/load.sgml
@@ -51,11 +51,12 @@ Description
 
   
Non-superusers can only apply LOAD to library files
+   listed in  or
located in $libdir/plugins/ — the specified
filename must begin
-   with exactly that string.  (It is the database administrator's
+   with exactly that string.  It is the database administrator's
responsibility to ensure that only safe libraries
-   are installed there.)
+   are enabled in these ways.
   
  
 
diff --git a/src/backend/utils/fmgr/dfmgr.c b/src/backend/utils/fmgr/dfmgr.c
index 562a7c9..eab1d2b 100644
--- a/src/backend/utils/fmgr/dfmgr.c
+++ b/src/backend/utils/fmgr/dfmgr.c
@@ -23,6 +23,8 @@
 #endif
 #include "lib/stringinfo.h"
 #include "miscadmin.h"
+#include "nodes/pg_list.h"
+#include "utils/builtins.h"
 #include "utils/dynamic_loader.h"
 #include "utils/hsearch.h"
 
@@ -70,6 +72,7 @@
 #endif
 
 char	   *Dynamic_library_path;
+char	   *user_loadable_libraries_string;
 
 static void *internal_load_library(const char *libname);
 static void incompatible_module_error(const char *libname,
@@ -538,12 +541,44 @@ static void incompatible_module_error(const char *libname,
 static void
 check_restricted_library_name(const char *name)
 {
-	if (strncmp(name, "$libdir/plugins/", 16) != 0 ||
-		first_dir_separator(name + 16) != NULL)
-		ereport(ERROR,
-(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- errmsg("access to library \"%s\" is not allowed",
-		name)));
+	char	   *rawstring;
+	List	   *elemlist;
+	ListCell   *l;
+
+	rawstring = pstrdup(

Re: [HACKERS] pg_ctl idempotent option

2013-01-15 Thread Alvaro Herrera
Vik Reykja escribió:
> On Mon, Jan 14, 2013 at 4:22 PM, Tom Lane  wrote:
> 
> > Peter Eisentraut  writes:
> > > Here is a patch to add an option -I/--idempotent to pg_ctl, the result
> > > of which is that pg_ctl doesn't error on start or stop if the server is
> > > already running or already stopped.
> >
> > Idempotent is a ten-dollar word.  Can we find something that average
> > people wouldn't need to consult a dictionary to understand?
> >
> 
> I disagree that we should dumb things down when the word means exactly
> what we want and, based on the rest of this thread, is the only word or
> phrase that carries the desired meaning.
> 
> I vote to keep --idempotent.

Yeah, after seeing the alternatives, I agree that it seems okay,
particularly given that the --help output explains the behavior in
dime-a-dozen words.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] logical changeset generation v4

2013-01-15 Thread Alvaro Herrera
Andres Freund wrote:
> On 2013-01-15 01:00:00 -0300, Alvaro Herrera wrote:
> > Andres Freund wrote:
> > 
> > I've been giving a couple of these parts a look.  In particular
> > 
> > > [03] Split out xlog reading into its own module called xlogreader
> > 
> > Cleaned this one up a bit last week.  I will polish it some more,
> > publish for some final comments, and commit.
> 
> I have some smaller bugfixes in my current version that you probably
> don't have yet (on grounds of being fixed this weekend)... So we need to
> be a bit careful not to lose those.

Sure.  Do you have them as individual commits?  I'm assuming you rebased
the tree.  Maybe in your reflog?  IIRC I also have at least one minor
bug fix.

> > > Second, I don't think the test_logical_replication functions should live
> > > in core as they shouldn't be used for a production replication scenario
> > > (causes longrunning transactions, requires polling) , but I have failed
> > > to find a neat way to include a contrib extension in the plain
> > > regression tests.
> > 
> > I think this would work if you make a "stamp" file in the contrib
> > module, similar to how doc/src/sgml uses those.
> 
> I tried that, the problem is not the building itself but getting it
> installed into the temporary installation...

Oh, hm.  Maybe the contrib module's make installcheck, then?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] logical changeset generation v4

2013-01-15 Thread Andres Freund
On 2013-01-15 09:56:41 -0300, Alvaro Herrera wrote:
> Andres Freund wrote:
> > On 2013-01-15 01:00:00 -0300, Alvaro Herrera wrote:
> > > Andres Freund wrote:
> > > 
> > > I've been giving a couple of these parts a look.  In particular
> > > 
> > > > [03] Split out xlog reading into its own module called xlogreader
> > > 
> > > Cleaned this one up a bit last week.  I will polish it some more,
> > > publish for some final comments, and commit.
> > 
> > I have some smaller bugfixes in my current version that you probably
> > don't have yet (on grounds of being fixed this weekend)... So we need to
> > be a bit careful not to lose those.
> 
> Sure.  Do you have them as individual commits?  I'm assuming you rebased
> the tree.  Maybe in your reflog?  IIRC I also have at least one minor
> bug fix.

I can check. Which commit did you base your modifications on?

> > > > Second, I don't think the test_logical_replication functions should live
> > > > in core as they shouldn't be used for a production replication scenario
> > > > (causes longrunning transactions, requires polling) , but I have failed
> > > > to find a neat way to include a contrib extension in the plain
> > > > regression tests.
> > > 
> > > I think this would work if you make a "stamp" file in the contrib
> > > module, similar to how doc/src/sgml uses those.
> > 
> > I tried that, the problem is not the building itself but getting it
> > installed into the temporary installation...
> 
> Oh, hm.  Maybe the contrib module's make installcheck, then?

That's what I do right now, but I'd really prefer to have it checked
during a normal make check; installcheck isn't run all that commonly :(

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [PATCH] COPY .. COMPRESSED

2013-01-15 Thread Greg Stark
On Tue, Jan 15, 2013 at 2:33 AM, Stephen Frost  wrote:
> * Peter Eisentraut (pete...@gmx.net) wrote:
>> On 1/14/13 11:28 AM, Stephen Frost wrote:
>> > While there is no option currently for having the server do the
>> > compression before sending the data over the wire.
>>
>> OpenSSL?
>
> To be honest, I expected that to come up earlier in this discussion.
> It'd be redundant to use OpenSSL for compression and then ALSO do
> compression on the client side to save into a custom format dump.

For what it's worth, there was a security announcement not long ago
that made OpenSSL disable compression in streams by default. I'm not
sure whether it's relevant to Postgres or not.

-- 
greg




Re: [HACKERS] another idea for changing global configuration settings from SQL

2013-01-15 Thread Peter Eisentraut
On 11/15/12 12:53 PM, Peter Eisentraut wrote:
> We already have the ability to store in pg_db_role_setting configuration
> settings for
> 
> specific user, specific database
> specific user, any database
> any user, specific database
> 
> The existing infrastructure would also support
> 
> any user, any database (= all the time)
> 
> All you'd need is to add
> 
> ApplySetting(InvalidOid, InvalidOid, relsetting, PGC_S_$SOMETHING);
> 
> in postinit.c, and have some SQL command to modify this setting.

Here is a patch for that.

The internals are straightforward.  Actually, we might want to refactor
this a bit later, unifying the AlterRoleSet and AlterDatabaseSet parse
nodes and the functions that do the work, because it's really all the same.

The SQL level interface is a bit odd.  The existing facilities are

ALTER ROLE / SET
ALTER DATABASE / SET
ALTER ROLE / IN DATABASE / SET

Since the original design somehow considered roles to be superior to
databases in this regard, I added the global setting as ALTER ROLE ALL
SET ..., but that's obviously arbitrary.  Most other variants would
probably be much more invasive, though.

diff --git a/doc/src/sgml/ref/alter_role.sgml b/doc/src/sgml/ref/alter_role.sgml
index 2fbba53..6fa51ee 100644
--- a/doc/src/sgml/ref/alter_role.sgml
+++ b/doc/src/sgml/ref/alter_role.sgml
@@ -39,9 +39,9 @@
 ALTER ROLE name RENAME TO new_name
 
 ALTER ROLE name [ IN DATABASE database_name ] SET configuration_parameter { TO | = } { value | DEFAULT }
-ALTER ROLE name [ IN DATABASE database_name ] SET configuration_parameter FROM CURRENT
-ALTER ROLE name [ IN DATABASE database_name ] RESET configuration_parameter
-ALTER ROLE name [ IN DATABASE database_name ] RESET ALL
+ALTER ROLE [ name | ALL ] [ IN DATABASE database_name ] SET configuration_parameter FROM CURRENT
+ALTER ROLE [ name | ALL ] [ IN DATABASE database_name ] RESET configuration_parameter
+ALTER ROLE [ name | ALL ] [ IN DATABASE database_name ] RESET ALL
 
  
 
@@ -83,8 +83,15 @@ Description
   
The remaining variants change a role's session default for a configuration
variable, either for all databases or, when the IN
-   DATABASE clause is specified, only for sessions in
-   the named database. Whenever the role subsequently
+   DATABASE clause is specified, only for sessions in the named
+   database.  If ALL is specified instead of a role name,
+   this changes the setting for all roles.  Using ALL
+   with IN DATABASE is effectively the same as using the
+   command ALTER DATABASE ... SET 
+  
+
+  
+   Whenever the role subsequently
starts a new session, the specified value becomes the session
default, overriding whatever setting is present in
postgresql.conf or has been received from the 
postgres
@@ -93,12 +100,17 @@ Description
 does not cause new
configuration values to be set.
Settings set for all databases are overridden by database-specific settings
-   attached to a role.
+   attached to a role.  Settings for specific databases or specific roles override
+   settings for all roles.
+  
+
+  
Superusers can change anyone's session defaults. Roles having
CREATEROLE privilege can change defaults for non-superuser
roles. Ordinary roles can only set defaults for themselves.
Certain configuration variables cannot be set this way, or can only be
-   set if a superuser issues the command.
+   set if a superuser issues the command.  Only superusers can change a setting
+   for all roles in all databases.
   
  
 
@@ -307,6 +319,7 @@ See Also
   


+   

   
  
diff --git a/src/backend/commands/user.c b/src/backend/commands/user.c
index 3ba877d..5edb59a 100644
--- a/src/backend/commands/user.c
+++ b/src/backend/commands/user.c
@@ -814,41 +814,46 @@ static void DelRoleMems(const char *rolename, Oid roleid,
 {
HeapTuple   roletuple;
Oid databaseid = InvalidOid;
-   Oid roleid;
+   Oid roleid = InvalidOid;
 
-   roletuple = SearchSysCache1(AUTHNAME, PointerGetDatum(stmt->role));
+   if (stmt->role)
+   {
+   roletuple = SearchSysCache1(AUTHNAME, PointerGetDatum(stmt->role));
 
-   if (!HeapTupleIsValid(roletuple))
-   ereport(ERROR,
-   (errcode(ERRCODE_UNDEFINED_OBJECT),
-errmsg("role \"%s\" does not exist", stmt->role)));
+   if (!HeapTupleIsValid(roletuple))
+   ereport(ERROR,
+   (errcode(ERRCODE_UNDEFINED_OBJECT),
+errmsg("role \"%s\" does not exist", stmt->role)));
 
-   roleid = HeapTupleGetOid(roletuple);
+   roleid = HeapTupleGetOid(roletuple);
 
-   /*
-* Obtain a lock on the role and make sure it didn't go away in the
-* meantime.
-*/
-   shdepLockAndCheckObject(AuthIdRelationId, HeapTupleGetOid(roletuple));
+   

[HACKERS] Teaching pg_receivexlog to follow timeline switches

2013-01-15 Thread Heikki Linnakangas
Now that a standby server can follow timeline switches through streaming 
replication, we should teach pg_receivexlog to do the same. Patch 
attached.


I made one change to the way the START_STREAMING command works, to better 
support this. When a standby server reaches the end of the timeline it's 
streaming from the master, it stops streaming, fetches any missing timeline 
history files, and parses the history file of the latest timeline to 
figure out where to continue. However, I don't want to parse timeline 
history files in pg_receivexlog. Better to keep it simple. So instead, I 
modified the server-side code for START_STREAMING to return the next 
timeline's ID at the end, and used that in pg_receivexlog. I also 
modified BASE_BACKUP to return not only the start XLogRecPtr, but also 
the corresponding timeline ID. Otherwise we might try to start streaming 
from the wrong timeline if you issue a BASE_BACKUP at the same moment the 
server switches to a new timeline.


When pg_receivexlog switches timeline, what to do with the partial file 
on the old timeline? When the timeline changes in the middle of a WAL 
segment, the segment on the old timeline is only half-filled. For 
example, when the timeline changes from 1 to 2, you'll have this in pg_xlog:


000000010000000000000006
000000010000000000000007
000000010000000000000008
000000020000000000000008
00000002.history

The segment 000000010000000000000008 is only half-filled, as the 
timeline changed in the middle of that segment. The beginning portion of 
that file is duplicated in 000000020000000000000008, with the 
timeline-changing checkpoint record right after the duplicated portion.


When we stream that with pg_receivexlog, and hit the timeline switch, 
we'll have this situation in the client:


000000010000000000000006
000000010000000000000007
000000010000000000000008.partial

What to do with the partial file? One option is to rename it to 
000000010000000000000008. However, if you then kill pg_receivexlog 
before it has finished streaming a full segment from the new timeline, 
on restart it will try to begin streaming WAL segment 
000000010000000000000009, because it sees that segment 
000000010000000000000008 is already completed. That'd be wrong.


The best option seems to be to just leave the .partial file in place, so 
as streaming progresses, you end up with:


000000010000000000000006
000000010000000000000007
000000010000000000000008.partial
000000020000000000000008
000000020000000000000009
00000002000000000000000A.partial

It feels a bit confusing to have that old partial file there, but that 
seems like the most correct solution. That file is indeed partial. This 
also ensures that if the server running on timeline 1 continues to 
generate new WAL, and it fills 000000010000000000000008, we won't 
confuse the partial segment with that name with a full one.


- Heikki
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index e14627c..baae59d 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -1418,8 +1418,10 @@ The commands accepted in walsender mode are:
  
   After streaming all the WAL on a timeline that is not the latest one,
   the server will end streaming by exiting the COPY mode. When the client
-  acknowledges this by also exiting COPY mode, the server responds with a
-  CommandComplete message, and is ready to accept a new command.
+  acknowledges this by also exiting COPY mode, the server sends a
+  single-row, single-column result set indicating the next timeline in
+  this server's history. That is followed by a CommandComplete message,
+  and the server is ready to accept a new command.
  
 
  
@@ -1784,7 +1786,9 @@ The commands accepted in walsender mode are:
  
  
   The first ordinary result set contains the starting position of the
-  backup, given in XLogRecPtr format as a single column in a single row.
+  backup, in a single row with two columns. The first column contains
+  the start position given in XLogRecPtr format, and the second column
+  contains the corresponding timeline ID.
  
  
   The second ordinary result set has one row for each tablespace.
@@ -1827,7 +1831,9 @@ The commands accepted in walsender mode are:
   ustar interchange format specified in the POSIX 1003.1-2008
   standard) dump of the tablespace contents, except that the two trailing
   blocks of zeroes specified in the standard are omitted.
-  After the tar data is complete, a final ordinary result set will be sent.
+  After the tar data is complete, a final ordinary result set will be sent,
+  containing the WAL end position of the backup, in the same format as
+  the start position.
  
 
  
diff --git a/src/backend/access/transam/timeline.c b/src/backend/access/transam/timeline.c
index 46379bb..ad4f316 100644
--- a/src/backend/access/transam/timeline.c
+++ b/src/backend/access/transam/timeline.c
@@ -5

Re: [HACKERS] recent ALTER whatever .. SET SCHEMA refactoring

2013-01-15 Thread Alvaro Herrera
Alvaro Herrera escribió:
> Kohei KaiGai escribió:
> 
> > I'm probably saying same idea. It just adds invocation of external
> > functions to check naming conflicts of functions or collation; that
> > takes additional 4-lines for special case handling
> > in AlterObjectNamespace_internal().
> 
> Okay, I can agree with this implementation plan.

Actually, now that I look again, this is all completely broken, because
the "object already exists in schema foo" message is using
getObjectDescription infrastructure, which we agree to be completely
wrong.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] system administration functions with hardcoded superuser checks

2013-01-15 Thread Peter Eisentraut
On 12/18/12 12:09 PM, Peter Eisentraut wrote:
> There are some system administration functions that have hardcoded
> superuser checks, specifically:
> 
> pg_reload_conf
> pg_rotate_logfile
> pg_read_file
> pg_read_file_all
> pg_read_binary_file
> pg_read_binary_file_all
> pg_stat_file
> pg_ls_dir
> 
> Some of these are useful in monitoring or maintenance tools, and the
> hardcoded superuser checks require that these tools run with maximum
> privileges.  Couldn't we just install these functions without default
> privileges and allow users to grant privileges as necessary?

This is still being debated, but just for the heck of it, here is a
patch showing what this implementation would look like.

diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index c479c23..f6c2c53 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -773,3 +773,16 @@ CREATE FUNCTION ts_debug(IN document text,
 CREATE OR REPLACE FUNCTION
   pg_start_backup(label text, fast boolean DEFAULT false)
   RETURNS text STRICT VOLATILE LANGUAGE internal AS 'pg_start_backup';
+
+
+/*
+Revoke privileges for functions that should be accessible by
+superusers only by default.  We can't set the privileges in pg_proc.h,
+because bootstrap mode doesn't handle aclitem arrays.
+*/
+REVOKE EXECUTE ON FUNCTION pg_read_file(text, bigint, bigint) FROM PUBLIC;
+REVOKE EXECUTE ON FUNCTION pg_read_file(text) FROM PUBLIC;
+REVOKE EXECUTE ON FUNCTION pg_read_binary_file(text, bigint, bigint) FROM PUBLIC;
+REVOKE EXECUTE ON FUNCTION pg_read_binary_file(text) FROM PUBLIC;
+REVOKE EXECUTE ON FUNCTION pg_stat_file(text) FROM PUBLIC;
+REVOKE EXECUTE ON FUNCTION pg_ls_dir(text) FROM PUBLIC;
diff --git a/src/backend/utils/adt/genfile.c b/src/backend/utils/adt/genfile.c
index f53c7d4..c545dd8 100644
--- a/src/backend/utils/adt/genfile.c
+++ b/src/backend/utils/adt/genfile.c
@@ -174,11 +174,6 @@
int64   bytes_to_read = PG_GETARG_INT64(2);
char   *filename;
 
-   if (!superuser())
-   ereport(ERROR,
-   (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-(errmsg("must be superuser to read files"))));
-
filename = convert_and_check_filename(filename_t);
 
if (bytes_to_read < 0)
@@ -198,11 +193,6 @@
text   *filename_t = PG_GETARG_TEXT_P(0);
char   *filename;
 
-   if (!superuser())
-   ereport(ERROR,
-   (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-(errmsg("must be superuser to read files"))));
-
filename = convert_and_check_filename(filename_t);
 
PG_RETURN_TEXT_P(read_text_file(filename, 0, -1));
@@ -219,11 +209,6 @@
int64   bytes_to_read = PG_GETARG_INT64(2);
char   *filename;
 
-   if (!superuser())
-   ereport(ERROR,
-   (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-(errmsg("must be superuser to read files"))));
-
filename = convert_and_check_filename(filename_t);
 
if (bytes_to_read < 0)
@@ -243,11 +228,6 @@
text   *filename_t = PG_GETARG_TEXT_P(0);
char   *filename;
 
-   if (!superuser())
-   ereport(ERROR,
-   (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-(errmsg("must be superuser to read files"))));
-
filename = convert_and_check_filename(filename_t);
 
PG_RETURN_BYTEA_P(read_binary_file(filename, 0, -1));
@@ -267,11 +247,6 @@
HeapTuple   tuple;
TupleDesc   tupdesc;
 
-   if (!superuser())
-   ereport(ERROR,
-   (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-(errmsg("must be superuser to get file information"))));
-
filename = convert_and_check_filename(filename_t);
 
if (stat(filename, &fst) < 0)
@@ -331,11 +306,6 @@
struct dirent *de;
directory_fctx *fctx;
 
-   if (!superuser())
-   ereport(ERROR,
-   (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-(errmsg("must be superuser to get directory listings"))));
-
if (SRF_IS_FIRSTCALL())
{
MemoryContext oldcontext;
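
For illustration, with the hardcoded checks removed as above, access could
then be granted selectively -- something like the following sketch, where the
"monitor" role is hypothetical and not part of the patch:

```sql
-- Give a non-superuser monitoring role just the functions it needs;
-- "monitor" is an illustrative role name, not part of the patch.
CREATE ROLE monitor LOGIN;
GRANT EXECUTE ON FUNCTION pg_stat_file(text) TO monitor;
GRANT EXECUTE ON FUNCTION pg_ls_dir(text) TO monitor;
```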



Re: [HACKERS] strange OOM errors with EXECUTE in PL/pgSQL

2013-01-15 Thread Jan Wieck

On 12/20/2012 4:47 PM, Dimitri Fontaine wrote:

Tom Lane  writes:

The reason this fails is that you've got a half-megabyte source string,
and each of the 11000 plans that are due to be created from it saves
its own copy of the source string.  Hence, 5500 megabytes needed just
for source strings.

We could possibly fix this by inventing some sort of reference-sharing
arrangement (which'd be complicated and fragile) or by not storing the
source strings with the plans (which'd deal a serious blow to our
ability to provide helpful error messages).  Neither answer seems
appealing.


I don't readily see how complicated and fragile it would be, it looks
like a hash table of symbols pointing to source strings and a reference
counting, and each plan would need to reference that symbol. Now maybe
that's what you call complicated and fragile, and even if not, I'm not
really sure it would pull its weight. The use case of sending over and
over again *in a given session* the exact same query string without
using PREPARE/EXECUTE looks like quite tiny.



That sounds like a bit of overkill to me.

Don't all the plans result as a plan list from a multi-statement query 
string, which was parsed into a query tree "list" and each single query 
tree then planned? I don't think there is any way that a single one of 
those trees (parse or plan) will be free'd separately. If that is true, 
then proper usage of memory contexts would make reference counting 
obsolete, even though all plans refer to the same copy.



Jan

--
Anyone who trades liberty for security deserves neither
liberty nor security. -- Benjamin Franklin




Re: [HACKERS] recent ALTER whatever .. SET SCHEMA refactoring

2013-01-15 Thread Kohei KaiGai
2013/1/15 Alvaro Herrera :
> Alvaro Herrera escribió:
>> Kohei KaiGai escribió:
>>
>> > I'm probably saying same idea. It just adds invocation of external
>> > functions to check naming conflicts of functions or collation; that
>> > takes additional 4-lines for special case handling
>> > in AlterObjectNamespace_internal().
>>
>> Okay, I can agree with this implementation plan.
>
> Actually, now that I look again, this is all completely broken, because
> the "object already exists in schema foo" message is using
> getObjectDescription infrastructure, which we agree to be completely
> wrong.
>
http://www.postgresql.org/message-id/cadyhkswvqaa6if5wvuw5ezlaiyycyee2zo9guqnky8frdlx...@mail.gmail.com

Does this patch help the trouble?
It adds ereport_on_namespace_conflict() for error message generation instead of
getObjectDescription() for ALTER RENAME primarily, but I also noticed it can be
applied on getObjectDescription() of AlterObjectNamespace_internal.

Thanks,
-- 
KaiGai Kohei 




Re: [HACKERS] [PATCH] COPY .. COMPRESSED

2013-01-15 Thread Claudio Freire
On Mon, Jan 14, 2013 at 11:33 PM, Stephen Frost  wrote:
> Now, protocol-level on-the-wire compression
> is another option, but there's quite a few drawbacks to that and quite a
> bit of work involved.  Having support for COPY-based compression could
> be an answer for many cases where on-the-wire compression is desirable.

Like?

Postgres' packetized protocol lends itself quite well to this kind of
thing. It could even be done on a packet-by-packet basis. The only
drawback I see, is that it pretty much rules out piping through
arbitrary commands (a protocol needs to be very clearly defined).




Re: [HACKERS] recent ALTER whatever .. SET SCHEMA refactoring

2013-01-15 Thread Alvaro Herrera
Kohei KaiGai escribió:
> 2013/1/15 Alvaro Herrera :
> > Alvaro Herrera escribió:
> >> Kohei KaiGai escribió:
> >>
> >> > I'm probably saying same idea. It just adds invocation of external
> >> > functions to check naming conflicts of functions or collation; that
> >> > takes additional 4-lines for special case handling
> >> > in AlterObjectNamespace_internal().
> >>
> >> Okay, I can agree with this implementation plan.
> >
> > Actually, now that I look again, this is all completely broken, because
> > the "object already exists in schema foo" message is using
> > getObjectDescription infrastructure, which we agree to be completely
> > wrong.
> >
> http://www.postgresql.org/message-id/cadyhkswvqaa6if5wvuw5ezlaiyycyee2zo9guqnky8frdlx...@mail.gmail.com
> 
> Does this patch help the trouble?
> It adds ereport_on_namespace_conflict() for error message generation instead of
> getObjectDescription() for ALTER RENAME primarily, but I also noticed it can be
> applied on getObjectDescription() of AlterObjectNamespace_internal.

I was just going to look into that patch, thanks.

Anyway I noticed that the getObjectDescriptionOids() in that path has
been there since 9.1 introduced generic object support for SET SCHEMA in
55109313.

-- 
Álvaro Herrera            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] ALTER command reworks

2013-01-15 Thread Alvaro Herrera
Kohei KaiGai escribió:

> The attached patch is a rebased version towards the latest master
> branch, and fix up the issue around error messages on name conflicts.

I assume the lock.c changes are just a bollixed merge, right?

I am not sure about some of the changes in the regression test expected
output; are we really okay with losing the "in schema foo" part in some
of these cases?

-- 
Álvaro Herrera            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] strange OOM errors with EXECUTE in PL/pgSQL

2013-01-15 Thread Tom Lane
Jan Wieck  writes:
> On 12/20/2012 4:47 PM, Dimitri Fontaine wrote:
>> Tom Lane  writes:
>>> The reason this fails is that you've got a half-megabyte source string,
>>> and each of the 11000 plans that are due to be created from it saves
>>> its own copy of the source string.  Hence, 5500 megabytes needed just
>>> for source strings.
>>> 
>>> We could possibly fix this by inventing some sort of reference-sharing
>>> arrangement (which'd be complicated and fragile) or by not storing the
>>> source strings with the plans (which'd deal a serious blow to our
>>> ability to provide helpful error messages).  Neither answer seems
>>> appealing.

> Don't all the plans result as a plan list from a multi-statement query 
> string, which was parsed into a query tree "list" and each single query 
> tree then planned? I don't think there is any way that a single one of 
> those trees (parse or plan) will be free'd separately. If that is true, 
> then proper usage of memory contexts would make reference counting 
> obsolete, even though all plans refer to the same copy.

The issue is that a multi-statement string gives rise to multiple
CachedPlanSources, which could be freed independently so far as
plancache.c knows.  (spi.c wouldn't actually attempt to do so.)
So you'd really need reference counting, or else some explicit
connection between the CachedPlanSources, neither of which seems
exactly trivial to me.

As of HEAD this particular complaint is moot anyway, because SPI_execute
now goes through the "one-shot CachedPlan" facility, and so it makes no
(zero) copies of the source string.

It'd still be possible to hit the problem when trying to SPI_prepare a
very-many-statement string, but I think the use case for that is pretty
darn small.  So I'm not excited about adding complication to fix it.

regards, tom lane




Re: [HACKERS] replace plugins directory with GUC

2013-01-15 Thread Tom Lane
Peter Eisentraut  writes:
> Here is a patch, with some_clever_name = user_loadable_libraries.

Shouldn't user_loadable_libraries be PGC_SUSET?  PGC_SIGHUP is typically
used when the same setting must apply in all active sessions, but it's
not clear why such a restriction must be made for this.

regards, tom lane




Re: [HACKERS] [PATCH] COPY .. COMPRESSED

2013-01-15 Thread Peter Eisentraut
On 1/13/13 9:16 PM, Stephen Frost wrote:
> On top of this I plan to submit a trivial patch to add support for
>   this to file_fdw, allowing creation of FDW tables which operate
>   directly on compressed files (including CSVs, which is what I need
>   this patch for).
> 
>   I've also begun working on a patch to allow this capability to be used
>   through pg_dump/pg_restore which would reduce the bandwidth used
>   between the client and the server for backups and restores.  Ideally,
>   one would also be able to use custom format dumps, with compression,
>   even if the client-side pg_dump/pg_restore wasn't compiled with zlib
>   support.

I think a problem is that this code is now serving such different uses.

Operating on compressed files transparently in file_fdw is obviously
useful, but why only gzip?  The gold standard is GNU tar, which can
operate on any compressed file in a variety of compression formats
without even having to specify an option.

Writing compressed COPY output files on the backend has limited uses, at
least none have been clearly explained, and the popen patch might
address those better.

Writing compressed COPY output on the frontend can already be done
differently.

Compression on the wire is a different debate and it probably shouldn't
be snuck in through this backdoor.

Putting compressed COPY output from the backend straight into a
compressed pg_dump file sounds interesting, but this patch doesn't do
that yet, and I think there will be more issues to solve there.




Re: [HACKERS] pg_ctl idempotent option

2013-01-15 Thread Tom Lane
Alvaro Herrera  writes:
> Vik Reykja escribió:
>> On Mon, Jan 14, 2013 at 4:22 PM, Tom Lane  wrote:
>>> Idempotent is a ten-dollar word.  Can we find something that average
>>> people wouldn't need to consult a dictionary to understand?

>> I disagree that we should dumb things down when the word means exactly what
>> we want and based on the rest of this thread is the only word or word
>> cluster that carries the desired meaning.

I'm not quite convinced that it means *exactly* what we want.  The
dictionary definition, according to my laptop's dictionary, is "denoting
an element of a set that is unchanged in value when multiplied or
otherwise operated on by itself".  I'm well aware that computer people
often use it to mean "an operation that doesn't change the system state
if the state is already what's wanted", but I think that's probably an
abuse of the mathematical usage.  And in any case, I'm not sure that
non-hackers would immediately recognize the term, nor be enlightened by
their dictionaries.  But ...

> Yeah, after seeing the alternatives, I agree that it seems okay,

... I was only asking if we *could* find a better choice.  Seems we
can't, so let's stick with this.

regards, tom lane




Re: [HACKERS] logical changeset generation v4

2013-01-15 Thread Tom Lane
Andres Freund  writes:
> On 2013-01-15 09:56:41 -0300, Alvaro Herrera wrote:
>> Oh, hm.  Maybe the contrib module's make installcheck, then?

> Thats what I do right now, but I really would prefer to have it checked
> during normal make checks, installchecks aren't run all that commonly :(

Sure they are, in every buildfarm cycle.  I don't see the problem.

(In the case of contrib, make installcheck is a whole lot faster than
make check, as well as being older.  So I don't really see why you
think it's less used.)

regards, tom lane




Re: [HACKERS] ALTER command reworks

2013-01-15 Thread Kohei KaiGai
2013/1/15 Alvaro Herrera :
> Kohei KaiGai escribió:
>
>> The attached patch is a rebased version towards the latest master
>> branch, and fix up the issue around error messages on name conflicts.
>
> I assume the lock.c changes are just a bollixed merge, right?
>
Yes, I'll check it and rebase it.

> I am not sure about some of the changes in the regression test expected
> output; are we really okay with losing the "in schema foo" part in some
> of these cases?
>
The changes to the expected regression test results originated from the
elimination of getObjectDescription, but "in schema foo" should be kept.
It looks like an obvious mistake on my part. Sorry.

Thanks,
-- 
KaiGai Kohei 




Re: [HACKERS] logical changeset generation v4

2013-01-15 Thread Andres Freund
On 2013-01-15 10:28:28 -0500, Tom Lane wrote:
> Andres Freund  writes:
> > On 2013-01-15 09:56:41 -0300, Alvaro Herrera wrote:
> >> Oh, hm.  Maybe the contrib module's make installcheck, then?
> 
> > Thats what I do right now, but I really would prefer to have it checked
> > during normal make checks, installchecks aren't run all that commonly :(
> 
> Sure they are, in every buildfarm cycle.  I don't see the problem.
> 
> (In the case of contrib, make installcheck is a whole lot faster than
> make check, as well as being older.  So I don't really see why you
> think it's less used.)

Oh. Because I was being dumb ;). And I admittedly never ran a buildfarm
animal so far.

But the other part of the problem is hiding in the unfortunately removed
part of the problem description - the tests require the non-default
options wal_level=logical and max_logical_slots=3+.
Is there a problem of making those the default in the buildfarm created
config?

I guess there would need to be an alternative output file for wal_level
< logical? Afaics there is no way to make a test conditional?

I shortly thought something like
DO $$
BEGIN
IF current_setting('wal_level') != 'logical' THEN
   RAISE FATAL 'wal_level needs to be logical';
END IF;
END
$$;
could be used to avoid creating a huge second output file, but we can't
raise FATAL errors from plpgsql.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] ALTER command reworks

2013-01-15 Thread Alvaro Herrera
Kohei KaiGai escribió:
> 2013/1/15 Alvaro Herrera :
> > Kohei KaiGai escribió:
> >
> >> The attached patch is a rebased version towards the latest master
> >> branch, and fix up the issue around error messages on name conflicts.
> >
> > I assume the lock.c changes are just a bollixed merge, right?
> >
> Yes, I'll check it and rebase it.

Wait for a bit before publishing a new version -- I'm going to push the
other patch so that you can merge on top.

Note that I'm going to commit a new function like this:

/*
 * Raise an error to the effect that an object of the given name is already
 * present in the given namespace.
 */
static void
report_namespace_conflict(Oid classId, const char *name, Oid nspOid)
{
char   *msgfmt;

Assert(OidIsValid(nspOid));


For this patch you will need to create a separate function which does
the conflicting-name report for objects that are not in a namespace.
Mixing both in-schema and schemaless objects in the same report function
seems messy to me.

-- 
Álvaro Herrera            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] count(*) of zero rows returns 1

2013-01-15 Thread Tom Lane
Gurjeet Singh  writes:
> On Mon, Jan 14, 2013 at 4:15 PM, Tom Lane  wrote:
>> David Johnston  writes:
>>> SELECT *;
>>> Results in:
>>> SQL Error: ERROR:  SELECT * with no tables specified is not valid

> Interesting to note that SELECT * FROM table_with_zero_cols does not
> complain of anything.

Yeah.  IIRC that error check long predates our support for zero-column
tables (which we allowed when we introduced ALTER TABLE DROP COLUMN,
so that dropping the only column doesn't have to be an error case).
You could argue that the existence of zero-column tables makes the
error check inconsistent, but I think it's probably good as-is.  The
possibility that "*" can sometimes validly expand to no columns doesn't
mean that "SELECT *;" isn't almost surely a typo.
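
For concreteness, the asymmetry being discussed looks like this (a session
sketch; the table name is made up):

```sql
CREATE TABLE zcols (a int);
ALTER TABLE zcols DROP COLUMN a;  -- zero-column tables allowed since DROP COLUMN
SELECT * FROM zcols;              -- fine: "*" validly expands to no columns
SELECT *;                         -- still rejected as an almost-sure typo
```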

regards, tom lane




Re: [HACKERS] pg_ctl idempotent option

2013-01-15 Thread Peter Eisentraut
On 1/14/13 10:22 AM, Tom Lane wrote:
> Also it appears to me that the hunk at lines 812ff is changing the
> default behavior, which is not what the patch is advertised to do.

True, I had forgotten to mention that.

Since it's already the behavior for start, another option would be to
just make it the default for stop as well and forget about the extra
options.  I'm not sure whether there is a big use case for getting an
error code on stop if the server is already stopped.





Re: [HACKERS] pg_ctl idempotent option

2013-01-15 Thread Peter Eisentraut
On 1/14/13 10:22 AM, Tom Lane wrote:
> Idempotent is a ten-dollar word.  Can we find something that average
> people wouldn't need to consult a dictionary to understand?

My suggestion in the original thread was --oknodo, but people didn't
like that either.




Re: [HACKERS] [PATCH] COPY .. COMPRESSED

2013-01-15 Thread Stephen Frost
* Peter Eisentraut (pete...@gmx.net) wrote:
> Operating on compressed files transparently in file_fdw is obviously
> useful, but why only gzip?

This isn't really an argument, imv.  It's only gzip *right this moment*
because that's all that I implemented.  I've already offered to add
bzip2 or whatever else people would like.

> The gold standard is GNU tar, which can
> operate on any compressed file in a variety of compression formats
> without even having to specify an option.

Yes, that's what I was hoping to get to, eventually.

> Writing compressed COPY output files on the backend has limited uses, at
> least none have been clearly explained, and the popen patch might
> address those better.

I do see value in the popen patch for server-side operations.

> Writing compressed COPY output on the frontend can already be done
> differently.

Certainly.  On a similar vein, I'm not convinced that the popen patch
for psql's \copy is really a great addition.

> Compression on the wire is a different debate and it probably shouldn't
> be snuck in through this backdoor.

Considering the COPY-COMPRESSED-to-FE piece is the vast majority of the
patch, I hope you understand that it certainly wasn't my intent to try
and 'sneak it in'.  Support for reading and writing compressed files
with COPY directly from the FE was one of my goals from the start on
this.

> Putting compressed COPY output from the backend straight into a
> compressed pg_dump file sounds interesting, but this patch doesn't do
> that yet, and I think there will be more issues to solve there.

Let me just vent my dislike for the pg_dump code. :)  Probably half the
time spent on this overall patch was fighting with that to make it work
and it's actually about 90% of the way there, imv.  Getting the
compressed data into pg_dump is working in my local branch, going to a
directory-format dump output, but the custom format is causing me some
difficulties which I believe are related to the blocking that's used and
that the blocks coming off the wire were 'full-size', if you will,
instead of being chunked down to 4KB by the client-side compression.
I've simply not had time to debug it and fix it and wanted to get the
general patch out for discussion (which I'm glad that I did, given that
there's other work going on that's related).

Thanks,

Stephen




Re: [HACKERS] Patches for TODO item: Avoid truncating empty OCDR temp tables on COMMIT

2013-01-15 Thread Tom Lane
Gurjeet Singh  writes:
> On Mon, Jan 14, 2013 at 10:33 PM, Tom Lane  wrote:
>> I think this is unacceptable on its face.  It essentially supposes that
>> relcache entries are reliable storage.  They are not.

> Would it be acceptable if we inverted the meaning of the struct member, and
> named it to  rd_rows_not_inserted. When registering an ON COMMIT action, we
> can set this member to true, and set it to false when inserting a row into
> it. The pre-commit hook will truncate any relation that doesn't have this
> member set to true.

> With that in place, even if the relcache entry is discarded midway through
> the transaction, the cleanup code will truncate the relation, preserving
> the correct behaviour.

Well, that would fail in the safe direction, but it just seems
excessively ugly and hard-to-understand.  Given the field demand for
this optimization (which so far as I've noticed is nil), I'm not
convinced we need to do this.

regards, tom lane




Re: [HACKERS] Curious buildfarm failures

2013-01-15 Thread Andres Freund
On 2013-01-14 22:56:47 +0100, Andres Freund wrote:
> On 2013-01-14 22:50:16 +0100, Andres Freund wrote:
> > On 2013-01-14 16:35:48 -0500, Tom Lane wrote:
> > > Since commit 2065dd2834e832eb820f1fbcd16746d6af1f6037, there have been
> > > a few buildfarm failures along the lines of
> > >
> > >   -- Commit table drop
> > >   COMMIT PREPARED 'regress-two';
> > > ! PANIC:  failed to re-find shared proclock object
> > > ! PANIC:  failed to re-find shared proclock object
> > > ! connection to server was lost
> > >
> > > Evidently I bollixed something, but what?  I've been unable to reproduce
> > > this locally so far.  Anybody see what's wrong?
> > >
> > > Another thing is that dugong has been reproducibly failing with
> > >
> > >  drop cascades to table testschema.atable
> > >   -- Should succeed
> > >   DROP TABLESPACE testspace;
> > > + ERROR:  tablespace "testspace" is not empty
> > >
> > > since the elog-doesn't-return patch (b853eb97) went in.  Maybe this is
> > > some local problem there but I'm suspicious that there's a connection.
> > > But what?
> > >
> > > Any insights out there?
> >
> > It also has:
> >
> > FATAL:  could not open file "base/16384/28182": No such file or directory
> > CONTEXT:  writing block 6 of relation base/16384/28182
> > TRAP: FailedAssertion("!(PrivateRefCount[i] == 0)", File: "bufmgr.c", Line: 1743)
> 
> > #3  0x40b4b510 in ExceptionalCondition (
> > conditionName=0x40d76390 "!(PrivateRefCount[i] == 0)",
> > errorType=0x40d06500 "FailedAssertion",
> > fileName=0x40d76260 "bufmgr.c", lineNumber=1743) at assert.c:54
> > #4  0x407a7d20 in AtProcExit_Buffers (code=1, arg=59) at bufmgr.c:1743
> > #5  0x407c4e50 in shmem_exit (code=1) at ipc.c:221
> >
> > in the log. So it seems like it also could be related to locking
> > changes although I don't immediately see why.
> 
> No such "luck" though, the animal only applied the elog commits:
> http://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=dugong&dt=2013-01-14%2000%3A00%3A02&stg=SCM-checkout

Do you have any idea what's going on? I don't really have any clue other
than guessing it might be a compiler bug.

Could the buildfarm owner perhaps tell us which files are left in the
tablespace 'testspace'?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_ctl idempotent option

2013-01-15 Thread Tom Lane
Peter Eisentraut  writes:
> On 1/14/13 10:22 AM, Tom Lane wrote:
>> Also it appears to me that the hunk at lines 812ff is changing the
>> default behavior, which is not what the patch is advertised to do.

> True, I had forgotten to mention that.

> Since it's already the behavior for start, another option would be to
> just make it the default for stop as well and forget about the extra
> options.  I'm not sure whether there is a big use case for getting an
> error code on stop if the server is already stopped.

Actually, I seem to recall having had to hack Red Hat's initscript
because the LSB standard requires that stopping a not-running server
*not* be an error.  So +1 for forgetting about the option entirely
and just making it idempotent all the time.
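
The LSB semantics described above are easy to sketch (in Python here for
brevity -- real initscripts do this in shell, and the pidfile path and
function name are made up):

```python
import os
import signal

PIDFILE = "/tmp/demo-postmaster-54321.pid"  # illustrative path only

def stop_server(pidfile: str = PIDFILE) -> int:
    """Idempotent stop: return status 0 even if the server is already
    down, as LSB initscript semantics require."""
    if not os.path.exists(pidfile):
        # Already stopped: success, not an error.
        print("server already stopped")
        return 0
    with open(pidfile) as f:
        pid = int(f.read().strip())
    os.kill(pid, signal.SIGTERM)
    os.remove(pidfile)
    return 0

stop_server()
```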

regards, tom lane




Re: [HACKERS] [PATCH] COPY .. COMPRESSED

2013-01-15 Thread Stephen Frost
* Claudio Freire (klaussfre...@gmail.com) wrote:
> Postgres' packetized protocol lends itself quite well to this kind of
> thing. It could even be done on a packet-by-packet basis. The only
> drawback I see, is that it pretty much rules out piping through
> arbitrary commands (a protocol needs to be very clearly defined).

Actually, wouldn't PG's packet-based protocol be exactly the wrong way
to do any kind of good on-the-wire compression?  You certainly wouldn't
want to waste time compressing small packets, such as a single INSERT
command or similar, as you'll always have to send a packet out anyway.
Even doing it at the protocol level with something ssl-like, where
you wrap the entire connection, wouldn't help if the client has a
process along the lines of:

send INSERT command
wait for response
send INSERT command
wait for response
..
..

Since you'd still have to flush after each small bit of data.

Where it does work well is when you move into a bulk-data mode (ala
COPY) and can compress relatively large amounts of data into a smaller
number of full-size packets to be sent.
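
The asymmetry described above is easy to demonstrate with zlib: compressing
each small message independently (as packet-at-a-time compression would)
barely helps or even hurts, while compressing the same bytes as one bulk
stream shrinks them dramatically. A standalone sketch, not protocol code:

```python
import zlib

# 1000 small "INSERT" messages, like an interactive command/response loop.
messages = [f"INSERT INTO t VALUES ({i});".encode() for i in range(1000)]
raw = b"".join(messages)

# Each packet compressed on its own: every one pays full header and
# checksum overhead and can't exploit redundancy across packets.
per_packet = sum(len(zlib.compress(m)) for m in messages)

# The same bytes compressed as one bulk stream, as COPY-style
# bulk-data mode allows.
bulk = len(zlib.compress(raw))

print(f"raw: {len(raw)}  per-packet: {per_packet}  bulk: {bulk}")
```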

Thanks,

Stephen




Re: [HACKERS] logical changeset generation v4

2013-01-15 Thread Tom Lane
Andres Freund  writes:
> But the other part of the problem is hiding in the unfortunately removed
> part of the problem description - the tests require the non-default
> options wal_level=logical and max_logical_slots=3+.

Oh.  Well, that's not going to work.

> Is there a problem of making those the default in the buildfarm created
> config?

Even if we hacked the buildfarm script to do so, it'd be a nonstarter
because it would cause ordinary manual "make installcheck" to fail.

I think the only reasonable way to handle this would be to (1) make
"make installcheck" a no-op in this contrib module, and (2) make
"make check" work, being careful to start the test postmaster with
the necessary options.

regards, tom lane




Re: [HACKERS] Curious buildfarm failures

2013-01-15 Thread Tom Lane
Andres Freund  writes:
>>> On 2013-01-14 16:35:48 -0500, Tom Lane wrote:
>>>> Another thing is that dugong has been reproducibly failing with
>>>>
>>>> drop cascades to table testschema.atable
>>>> -- Should succeed
>>>> DROP TABLESPACE testspace;
>>>> + ERROR:  tablespace "testspace" is not empty
>>>>
>>>> since the elog-doesn't-return patch (b853eb97) went in.  Maybe this is
>>>> some local problem there but I'm suspicious that there's a connection.
>>>> But what?

> Do you have any idea what's going on? I don't really have any clue other
> than guessing it might be a compiler bug.

I'm suspicious of that too, but it's hard to see why it would manifest
like this while everything else appears to be fine.  A codegen bug
triggered by elog ought to show up in a lot of places, one would think.

> Could the buildfarm owner perhaps tell us which files are left in the
> tablespace 'testspace'?

I'm betting the answer is "none", and that what's happening is some sort
of timing problem (ie, the DROP TABLESPACE somehow isn't waiting for the
checkpointer process to clean out all the just-dropped files).  The
PANIC at shutdown looks like it might be some sort of doppelganger of
the same bug, ie dropped table cleaned out too early.

regards, tom lane




Re: [HACKERS] logical changeset generation v4

2013-01-15 Thread Andres Freund
On 2013-01-15 11:10:22 -0500, Tom Lane wrote:
> Andres Freund  writes:
> > But the other part of the problem is hiding in the unfortunately removed
> > part of the problem description - the tests require the non-default
> > options wal_level=logical and max_logical_slots=3+.
> 
> Oh.  Well, that's not going to work.

An alternative would be to have max_logical_slots default to a low value
and make the amount of logged information a wal_level independent
GUC that can be changed on the fly.
ISTM that that would result in too complicated code to deal with other
backends not having the same notion of that value and such, but it's
possible.

> > Is there a problem of making those the default in the buildfarm created
> > config?
> 
> Even if we hacked the buildfarm script to do so, it'd be a nonstarter
> because it would cause ordinary manual "make installcheck" to fail.

I thought we could have a second expected file for that case. Not nice
:(

> I think the only reasonable way to handle this would be to (1) make
> "make installcheck" a no-op in this contrib module, and (2) make
> "make check" work, being careful to start the test postmaster with
> the necessary options.

You're talking about adding a contrib-module specific make check or
changing the normal make check's wal_level?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Curious buildfarm failures

2013-01-15 Thread Andrew Dunstan


On 01/15/2013 11:04 AM, Andres Freund wrote:
Could the buildfarm owner perhaps tell us which files are left in the 
tablespace 'testspace'?



They will not be able to easily - the workspace is normally cleared out 
at the end of each run.


cheers

andrew




Re: [HACKERS] Curious buildfarm failures

2013-01-15 Thread Andres Freund
On 2013-01-15 11:19:28 -0500, Tom Lane wrote:
> Andres Freund  writes:
> >>> On 2013-01-14 16:35:48 -0500, Tom Lane wrote:
>  Another thing is that dugong has been reproducibly failing with
>  
>  drop cascades to table testschema.atable
>  -- Should succeed
>  DROP TABLESPACE testspace;
>  + ERROR:  tablespace "testspace" is not empty
>  
>  since the elog-doesn't-return patch (b853eb97) went in.  Maybe this is
>  some local problem there but I'm suspicious that there's a connection.
>  But what?
> 
> > Do you have any idea what's going on? I don't really have any clue other than
> > guessing it might be a compiler bug.
> 
> I'm suspicious of that too, but it's hard to see why it would manifest
> like this while everything else appears to be fine.  A codegen bug
> triggered by elog ought to show up in a lot of places, one would think.

The make output showed that for some files optimizations were disabled by
the compiler because they were too complex. It's possible that it is
related to that :(.

Interestingly the compiler couldn't deduce that
e.g. DateTimeParseError() didn't return with the old ereport definition,
but it can with the new one, which seems strange.

> > Could the buildfarm owner perhaps tell us which files are left in the
> > tablespace 'testspace'?
> 
> I'm betting the answer is "none", and that what's happening is some sort
> of timing problem (ie, the DROP TABLESPACE somehow isn't waiting for the
> checkpointer process to clean out all the just-dropped files).  The
> PANIC at shutdown looks like it might be some sort of doppelganger of
> the same bug, ie dropped table cleaned out too early.

It seems strange though that it started failing exactly with that commit
and all runs have failed exactly the same way since.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] logical changeset generation v4

2013-01-15 Thread Tom Lane
Andres Freund  writes:
> On 2013-01-15 11:10:22 -0500, Tom Lane wrote:
>> I think the only reasonable way to handle this would be to (1) make
>> "make installcheck" a no-op in this contrib module, and (2) make
>> "make check" work, being careful to start the test postmaster with
>> the necessary options.

> You're talking about adding a contrib-module specific make check or
> changing the normal make check's wal_level?

This contrib module's "make check" would change the wal_level.  Global
change no good for any number of reasons, the most obvious being that
we want to be able to test other wal_levels too.

I'm not sure whether the "make check" infrastructure currently supports
passing arguments through to the test postmaster's command line, but it
shouldn't be terribly hard to add if not.
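As a sketch, such a module Makefile could combine both points. The knob names below (NO_INSTALLCHECK, pg_regress's --temp-config, the logical.conf file) are assumptions for illustration, not settled infrastructure:

```makefile
# Sketch of a contrib module Makefile for this scheme; the option names
# are assumptions, not committed infrastructure.
MODULES = test_decoding
REGRESS = ddl

# (1) "make installcheck" would fail against a stock server, so disable it.
NO_INSTALLCHECK = 1

# (2) "make check" starts its own postmaster; hand it the required settings.
REGRESS_OPTS = --temp-config=$(srcdir)/logical.conf
# where logical.conf contains:
#   wal_level = logical
#   max_logical_slots = 4
```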

regards, tom lane




Re: [HACKERS] Curious buildfarm failures

2013-01-15 Thread Tom Lane
Andres Freund  writes:
> Interestingly the compiler couldn't deduce that
> e.g. DateTimeParseError() didn't return with the old ereport definition
> but it can with the new one which seems strange.

Oooh, I hadn't noticed that.  I guess that must indicate that this
version of icc can deduce unreachability from

if (true)
abort();

but not from

(true) ? abort() : (void) 0;

which is a bit odd but not incredible.  (I had actually wondered while
working on the patch if this wording might be better for some compilers;
seems that's the case.)

What that means is that this compiler was not previously aware that
either ereport(ERROR) or elog(ERROR) doesn't return, but it now knows
that for both.  So that greatly expands the scope of places where
behavior might have changed.  Doesn't simplify our task :-(

>> I'm betting the answer is "none", and that what's happening is some sort
>> of timing problem (ie, the DROP TABLESPACE somehow isn't waiting for the
>> checkpointer process to clean out all the just-dropped files).

> It seems strange though that it started failing exactly with that commit
> and all runs have failed exactly the same way since.

Well, it could be quite reproducible, if for example what's happening is
that the DROP is failing to wait for the checkpointer at all.

Is there a way to enable log_checkpoints during a buildfarm run?
It'd be good to get timestamps added to the postmaster log entries, too.

regards, tom lane




Re: [HACKERS] Curious buildfarm failures

2013-01-15 Thread Andrew Dunstan


On 01/15/2013 11:57 AM, Tom Lane wrote:

Well, it could be quite reproducible, if for example what's happening is
that the DROP is failing to wait for the checkpointer at all.

Is there a way to enable log_checkpoints during a buildfarm run?
It'd be good to get timestamps added to the postmaster log entries, too.





Yes, it's very easy. In the config file, do something like:

 extra_config =>
 {
 DEFAULT => [
  q(log_line_prefix = '%t [%c:%l] '),
  "log_checkpoints = 'true',
  ],
 },



cheers

andrew




Re: [HACKERS] Curious buildfarm failures

2013-01-15 Thread Andrew Dunstan


On 01/15/2013 12:07 PM, Andrew Dunstan wrote:


On 01/15/2013 11:57 AM, Tom Lane wrote:

Well, it could be quite reproducible, if for example what's happening is
that the DROP is failing to wait for the checkpointer at all.

Is there a way to enable log_checkpoints during a buildfarm run?
It'd be good to get timestamps added to the postmaster log entries, too.





Yes, it's very easy. In the config file, do something like:


I had a missing quote. It should be:

   extra_config =>
 {
 DEFAULT => [
  q(log_line_prefix = '%t [%c:%l] '),
  "log_checkpoints = 'true'",
  ],
 },

cheers


andrew




Re: [HACKERS] [PATCH] COPY .. COMPRESSED

2013-01-15 Thread Claudio Freire
On Tue, Jan 15, 2013 at 1:08 PM, Stephen Frost  wrote:
> Where it does work well is when you move into a bulk-data mode (ala
> COPY) and can compress relatively large amounts of data into a smaller
> number of full-size packets to be sent.

Well... exactly. COPY is one case, big result sets is another.
And packet headers can include whether each packet is compressed or
not, which is quite transparent and easy to handle. There could even
be a negotiation phase to make it backwards-compatible.
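Framing a per-message compressed flag of this sort is cheap. Here is a minimal Python sketch; the header layout (one flag byte plus a length) is invented for illustration and is not the real FE/BE protocol:

```python
import struct
import zlib

COMPRESS_THRESHOLD = 64  # don't bother compressing tiny messages

def pack_message(payload: bytes) -> bytes:
    """Frame a message as: 1-byte compressed flag, 4-byte length, body."""
    if len(payload) >= COMPRESS_THRESHOLD:
        body = zlib.compress(payload)
        if len(body) < len(payload):        # only keep it if it helped
            return struct.pack('!BI', 1, len(body)) + body
    return struct.pack('!BI', 0, len(payload)) + payload

def unpack_message(frame: bytes) -> bytes:
    """Invert pack_message: inspect the flag, decompress only if set."""
    flag, length = struct.unpack('!BI', frame[:5])
    body = frame[5:5 + length]
    return zlib.decompress(body) if flag else body

# Round-trip: bulk COPY-style data compresses, short messages pass through.
bulk = b'1\tfoo\n2\tbar\n' * 1000
assert unpack_message(pack_message(bulk)) == bulk
assert len(pack_message(bulk)) < len(bulk)
assert unpack_message(pack_message(b'SELECT 1')) == b'SELECT 1'
```

Because the receiver decides per message from the flag byte, a server can freely leave small packets uncompressed, which is the transparency being argued for.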




Re: [HACKERS] [PATCH] COPY .. COMPRESSED

2013-01-15 Thread Garick Hamlin
On Tue, Jan 15, 2013 at 01:35:57PM +, Greg Stark wrote:
> On Tue, Jan 15, 2013 at 2:33 AM, Stephen Frost  wrote:
> > * Peter Eisentraut (pete...@gmx.net) wrote:
> >> On 1/14/13 11:28 AM, Stephen Frost wrote:
> >> > While there is no option currently for having the server do the
> >> > compression before sending the data over the wire.
> >>
> >> OpenSSL?
> >
> > To be honest, I expected that to come up earlier in this discussion.
> > It'd be redundant to use OpenSSL for compression and then ALSO do
> > compression on the client side to save into a custom format dump.
> 
> For what it's worth there was a security announcement not long ago
> that made OpenSSL disable compression in streams by default. I'm not
> sure if it's relevant to Postgres or not.

It's an interesting question.  It might be.  I thought at first it
wouldn't be relevant, but on reflection it is.

This attack is called the CRIME attack.  This class of attacks stems from reuse
of a dictionary across some sort of confidentiality boundary.  The attacker
looks at the traffic and notices how big the network response is.  A shorter
response tells the attacker the compressor has already seen the guessed text.

So imagine I have a website and I keep session cookies, user names, and 
password crypts in a database.  Imagine the session key is a long 
hexadecimal number.  As an attacker, I could send guesses of the prefix of 
somebody else's session key in the username field of the login form, while 
peeking at the encrypted traffic from the script to the database.  As the 
prefix match length increases, the database reply gets shorter.  
Essentially, it's a side channel attack that ends up reducing guessing to
a radix search by prefix of all valid session keys.  In this attack, I 
don't have access to the database except through the web form, but I
can see the encrypted database traffic.

It is not a huge vulnerability, but yeah, in some use cases, if PostgreSQL
used compression it might provide a difficult but possible attack route.
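The length leak itself is easy to demonstrate with any deflate implementation. In this toy sketch, plain zlib stands in for SSL-level compression, and the secret and field names are invented; the attacker is assumed to observe only ciphertext lengths, which compression alone determines:

```python
import zlib

SECRET = b"session_key=deadbeef42"   # invented secret the attacker wants

def observed_length(attacker_input: bytes) -> int:
    """Length an eavesdropper sees when secret and attacker data share
    one compression dictionary (the CRIME precondition)."""
    return len(zlib.compress(SECRET + b"&user=" + attacker_input))

right = observed_length(b"session_key=dead")   # matches the secret's prefix
wrong = observed_length(b"session_key=pqrs")   # same length, no match
# The correct prefix is encoded as a back-reference to the secret,
# so the compressed output is measurably shorter.
assert right < wrong
```

Repeating this per candidate byte turns guessing into the radix search by prefix described above.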

Garick




Re: [HACKERS] Curious buildfarm failures (fwd)

2013-01-15 Thread Sergey Koposov

Hi,


Date: Tue, 15 Jan 2013 11:57:07 -0500
From: Tom Lane 
To: Andres Freund 
Cc: m...@sai.msu.ru, pgsql-hackers@postgreSQL.org,
   Andrew Dunstan 
Subject: Re: Curious buildfarm failures

Well, it could be quite reproducible, if for example what's happening is
that the DROP is failing to wait for the checkpointer at all.

Is there a way to enable log_checkpoints during a buildfarm run?
It'd be good to get timestamps added to the postmaster log entries, too.


Here is the log output from the failing pg_regress run after enabling checkpoint logging and timestamps:

2013-01-15 21:20:19 MSK [50f58fd3.589e:1] LOG:  database system was shut down at 2013-01-15 21:20:19 MSK
2013-01-15 21:20:19 MSK [50f58fd3.58a2:1] LOG:  autovacuum launcher started
2013-01-15 21:20:19 MSK [50f58fd3.5899:1] LOG:  database system is ready to accept connections
2013-01-15 21:20:20 MSK [50f58fd3.589f:1] LOG:  checkpoint starting: immediate force wait
2013-01-15 21:20:21 MSK [50f58fd3.589f:2] LOG:  checkpoint complete: wrote 3 buffers (0.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=0.604 s, sync=0.000 s, total=0.605 s; sync files=0, longest=0.000 s, average=0.000 s
2013-01-15 21:20:21 MSK [50f58fd3.589f:3] LOG:  checkpoint starting: immediate force wait
2013-01-15 21:20:21 MSK [50f58fd3.589f:4] LOG:  checkpoint complete: wrote 0 buffers (0.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=0.000 s, sync=0.000 s, total=0.000 s; sync files=0, longest=0.000 s, average=0.000 s
2013-01-15 21:20:21 MSK [50f58fd5.58ac:1] ERROR:  unrecognized parameter "some_nonexistent_parameter"
2013-01-15 21:20:21 MSK [50f58fd5.58ac:2] STATEMENT:  ALTER TABLESPACE testspace SET (some_nonexistent_parameter = true);
2013-01-15 21:20:21 MSK [50f58fd5.58ac:3] ERROR:  RESET must not include values for parameters
2013-01-15 21:20:21 MSK [50f58fd5.58ac:4] STATEMENT:  ALTER TABLESPACE testspace RESET (random_page_cost = 2.0);
2013-01-15 21:20:21 MSK [50f58fd5.58ac:5] ERROR:  duplicate key value violates unique constraint "anindex"
2013-01-15 21:20:21 MSK [50f58fd5.58ac:6] DETAIL:  Key (column1)=(1) already exists.
2013-01-15 21:20:21 MSK [50f58fd5.58ac:7] STATEMENT:  INSERT INTO testschema.atable VALUES(1);
2013-01-15 21:20:21 MSK [50f58fd5.58ac:8] ERROR:  directory "/no/such/location" does not exist
2013-01-15 21:20:21 MSK [50f58fd5.58ac:9] STATEMENT:  CREATE TABLESPACE badspace LOCATION '/no/such/location';
2013-01-15 21:20:21 MSK [50f58fd5.58ac:10] ERROR:  tablespace "nosuchspace" does not exist
2013-01-15 21:20:21 MSK [50f58fd5.58ac:11] STATEMENT:  CREATE TABLE bar (i int) TABLESPACE nosuchspace;
2013-01-15 21:20:21 MSK [50f58fd3.589f:5] LOG:  checkpoint starting: immediate force wait
2013-01-15 21:20:21 MSK [50f58fd3.589f:6] LOG:  checkpoint complete: wrote 37 buffers (0.2%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.000 s, total=0.001 s; sync files=0, longest=0.000 s, average=0.000 s
2013-01-15 21:20:21 MSK [50f58fd5.58ac:12] ERROR:  tablespace "testspace" is not empty
2013-01-15 21:20:21 MSK [50f58fd5.58ac:13] STATEMENT:  DROP TABLESPACE testspace;
2013-01-15 21:20:21 MSK [50f58fd3.589f:7] LOG:  checkpoint starting: immediate force wait
2013-01-15 21:20:21 MSK [50f58fd3.589f:8] LOG:  checkpoint complete: wrote 9 buffers (0.1%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.000 s, total=0.001 s; sync files=0, longest=0.000 s, average=0.000 s
2013-01-15 21:20:21 MSK [50f58fd5.58ac:14] ERROR:  tablespace "testspace" is not empty
2013-01-15 21:20:21 MSK [50f58fd5.58ac:15] STATEMENT:  DROP TABLESPACE testspace;


And I do see the tablespace file left behind after "make check" finishes:
tmp_check/data/pg_tblspc/16385/PG_9.3_201212081/16384/16387

Cheers,
S

PS: I wouldn't be surprised if it is a compiler bug though. But I did see the 
failure with a newer icc as well.


*
Sergey E. Koposov, PhD, Research Associate
Institute of Astronomy, University of Cambridge
Madingley road, CB3 0HA, Cambridge, UK
Tel: +44-1223-337-551 Web: http://www.ast.cam.ac.uk/~koposov/




Re: [HACKERS] [PATCH] COPY .. COMPRESSED

2013-01-15 Thread Stephen Frost
* Claudio Freire (klaussfre...@gmail.com) wrote:
> On Tue, Jan 15, 2013 at 1:08 PM, Stephen Frost  wrote:
> > Where it does work well is when you move into a bulk-data mode (ala
> > COPY) and can compress relatively large amounts of data into a smaller
> > number of full-size packets to be sent.
> 
> Well... exactly. COPY is one case, big result sets is another.
> And packet headers can include whether each packet is compressed or
> not, which is quite transparent and easy to handle. There could even
> be a negotiation phase and make it backwards-compatible.

COPY and a large result set are the only cases, and a large result set
could easily be put inside of a COPY statement.  I agree that large
result sets outside of COPY could benefit from compression and perhaps
we can formulate a way to support that also.

Thanks,

Stephen




Re: [HACKERS] unlogged tables vs. GIST

2013-01-15 Thread Heikki Linnakangas

On 15.01.2013 08:54, Jeevan Chalke wrote:

For (2), I have added a new function called, GetXLogRecPtrForUnloogedRel()
which returns a fake LSN for GiST indexes. However, I have removed
GetXLogRecPtrForTemp() function and added its functionality inside this new
function itself to avoid complexity.


I don't much care for using a new field in the control file for this. 
First, it seems like a big modularity violation to store a gist-specific 
counter in the control file. Second, you'd be generating a lot of 
traffic on the ControlFileLock. It's not heavily contended at the 
moment, but when the control file is updated, it's held over an fsync, 
which could cause unnecessary stalls to insertions to unlogged gist 
tables. And it's just a bad idea to share a lock for two things with 
completely different characteristics in general.


Could we stash the counter e.g. in the root page of the index?

- Heikki




Re: [HACKERS] logical changeset generation v4

2013-01-15 Thread Alvaro Herrera
Andres Freund wrote:
> On 2013-01-15 09:56:41 -0300, Alvaro Herrera wrote:
> > Andres Freund wrote:
> > > On 2013-01-15 01:00:00 -0300, Alvaro Herrera wrote:
> > > > Andres Freund wrote:
> > > > 
> > > > I've been giving a couple of these parts a look.  In particular
> > > > 
> > > > > [03] Split out xlog reading into its own module called xlogreader
> > > > 
> > > > Cleaned this one up a bit last week.  I will polish it some more,
> > > > publish for some final comments, and commit.
> > > 
> > > I have some smaller bugfixes in my current version that you probably
> > > don't have yet (on grounds of being fixed this weekend)... So we need to
> be a bit careful not to lose those.
> > 
> > Sure.  Do you have them as individual commits?  I'm assuming you rebased
> > the tree.  Maybe in your reflog?  IIRC I also have at least one minor
> > bug fix.
> 
> I can check, which commit did you base your modifications on?

Dunno.  It's probably easier to reverse-apply the version you submitted
to see what changed, and then forward-apply again.

-- 
Álvaro Herrera  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] recent ALTER whatever .. SET SCHEMA refactoring

2013-01-15 Thread Alvaro Herrera
Robert Haas wrote:
> The recent SET SCHEMA refactoring has changed the error message that
> you get when trying to move a function into the schema that already
> contains it.

I have committed 7ac5760fa2 which should fix this.  Thanks for the
report.

-- 
Álvaro Herrera  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Teaching pg_receivexlog to follow timeline switches

2013-01-15 Thread Fujii Masao
On Tue, Jan 15, 2013 at 11:05 PM, Heikki Linnakangas
 wrote:
> Now that a standby server can follow timeline switches through streaming
> replication, we should do teach pg_receivexlog to do the same. Patch
> attached.
>
> I made one change to the way START_STREAMING command works, to better
> support this. When a standby server reaches the timeline it's streaming from
> the master, it stops streaming, fetches any missing timeline history files,
> and parses the history file of the latest timeline to figure out where to
> continue. However, I don't want to parse timeline history files in
> pg_receivexlog. Better to keep it simple. So instead, I modified the
> server-side code for START_STREAMING to return the next timeline's ID at the
> end, and used that in pg_receivexlog. I also modified BASE_BACKUP to return
> not only the start XLogRecPtr, but also the corresponding timeline ID.
> Otherwise we might try to start streaming from the wrong timeline if you issue a
> BASE_BACKUP at the same moment the server switches to a new timeline.
>
> When pg_receivexlog switches timeline, what to do with the partial file on
> the old timeline? When the timeline changes in the middle of a WAL segment,
> the segment on the old timeline is only half-filled. For example, when
> the timeline changes from 1 to 2, you'll have this in pg_xlog:
>
> 000000010000000000000006
> 000000010000000000000007
> 000000010000000000000008
> 000000020000000000000008
> 00000002.history
>
> The segment 000000010000000000000008 is only half-filled, as the timeline
> changed in the middle of that segment. The beginning portion of that file is
> duplicated in 000000020000000000000008, with the timeline-changing
> checkpoint record right after the duplicated portion.
>
> When we stream that with pg_receivexlog, and hit the timeline switch, we'll
> have this situation in the client:
>
> 000000010000000000000006
> 000000010000000000000007
> 000000010000000000000008.partial
>
> What to do with the partial file? One option is to rename it to
> 000000010000000000000008. However, if you then kill pg_receivexlog before it
> has finished streaming a full segment from the new timeline, on restart it
> will try to begin streaming WAL segment 000000010000000000000009, because it
> sees that segment 000000010000000000000008 is already completed. That'd be
> wrong.

Can't we rename the .partial file safely after we receive a full segment of
the WAL file with the new timeline and the same log/segment id?
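The hazard Heikki describes is essentially a property of the restart logic. A rough Python sketch of how a client might pick its resume point from the files on disk (the decision rule and names are invented for illustration, not pg_receivexlog's actual code; the 16 hex digits of log and segment number are folded into one value here):

```python
def resume_segment(files):
    """Which segment number to request next, given the WAL directory contents.

    Segment names are 24 hex digits (8 timeline + 16 log/seg); a trailing
    '.partial' marks a segment cut short by a timeline switch and therefore
    not yet complete.  Illustrative sketch only.
    """
    newest, newest_partial = -1, False
    for name in files:
        partial = name.endswith('.partial')
        base = name.removesuffix('.partial')
        if len(base) != 24:
            continue  # skips .history files and other non-segment files
        seg = int(base[8:], 16)
        if seg > newest:
            newest, newest_partial = seg, partial
    # A .partial segment must be re-requested from its start; a complete
    # segment means we can move on to the next one.
    return newest if newest_partial else newest + 1

# Killed right after the switch from timeline 1 to 2: segment 8 is partial,
# so we correctly ask for segment 8 again (now on the new timeline).
assert resume_segment(['000000010000000000000007',
                       '000000010000000000000008.partial']) == 8
# Had the .partial been renamed prematurely, the same logic would wrongly
# skip ahead to segment 9, losing the start of segment 8 on timeline 2.
assert resume_segment(['000000010000000000000007',
                       '000000010000000000000008']) == 9
```

This is why deferring the rename until a complete segment exists on the new timeline, as suggested above, closes the window.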

Regards,

-- 
Fujii Masao




Re: [HACKERS] logical changeset generation v4

2013-01-15 Thread Andres Freund
On 2013-01-15 15:16:44 -0300, Alvaro Herrera wrote:
> Andres Freund wrote:
> > On 2013-01-15 09:56:41 -0300, Alvaro Herrera wrote:
> > > Andres Freund wrote:
> > > > On 2013-01-15 01:00:00 -0300, Alvaro Herrera wrote:
> > > > > Andres Freund wrote:
> > > > > 
> > > > > I've been giving a couple of these parts a look.  In particular
> > > > > 
> > > > > > [03] Split out xlog reading into its own module called xlogreader
> > > > > 
> > > > > Cleaned this one up a bit last week.  I will polish it some more,
> > > > > publish for some final comments, and commit.
> > > > 
> > > > I have some smaller bugfixes in my current version that you probably
> > > > don't have yet (on grounds of being fixed this weekend)... So we need to
> > > > be a bit careful not to lose those.
> > > 
> > > Sure.  Do you have them as individual commits?  I'm assuming you rebased
> > > the tree.  Maybe in your reflog?  IIRC I also have at least one minor
> > > bug fix.
> > 
> > I can check, which commit did you base your modifications on?
> 
> Dunno.  It's probably easier to reverse-apply the version you submitted
> to see what changed, and then forward-apply again.

There's at least the two attached patches...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
>From 5ca4b81f03bd7a4bf5101bd68811548023ac12fe Mon Sep 17 00:00:00 2001
From: Andres Freund 
Date: Mon, 14 Jan 2013 21:43:13 +0100
Subject: [PATCH] xlogreader: fix

---
 src/backend/access/transam/xlogreader.c | 11 ++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c
index 6a420e6..9439c05 100644
--- a/src/backend/access/transam/xlogreader.c
+++ b/src/backend/access/transam/xlogreader.c
@@ -350,7 +350,7 @@ XLogReadRecord(XLogReaderState *state, XLogRecPtr RecPtr, char **errormsg)
 
 			/* Wait for the next page to become available */
 			readOff = ReadPageInternal(state, targetPagePtr,
-	   Min(len, XLOG_BLCKSZ));
+	   Min(total_len - gotlen + SizeOfXLogShortPHD, XLOG_BLCKSZ));
 
 			if (readOff < 0)
 goto err;
@@ -383,6 +383,11 @@ XLogReadRecord(XLogReaderState *state, XLogRecPtr RecPtr, char **errormsg)
 
 			/* Append the continuation from this page to the buffer */
 			pageHeaderSize = XLogPageHeaderSize(pageHeader);
+
+			if (readOff < pageHeaderSize)
+readOff = ReadPageInternal(state, targetPagePtr,
+		   pageHeaderSize);
+
 			Assert(pageHeaderSize <= readOff);
 
 			contdata = (char *) state->readBuf + pageHeaderSize;
@@ -390,6 +395,10 @@ XLogReadRecord(XLogReaderState *state, XLogRecPtr RecPtr, char **errormsg)
 			if (pageHeader->xlp_rem_len < len)
 len = pageHeader->xlp_rem_len;
 
+			if (readOff < (pageHeaderSize + len))
+readOff = ReadPageInternal(state, targetPagePtr,
+		   pageHeaderSize + len);
+
 			memcpy(buffer, (char *) contdata, len);
 			buffer += len;
 			gotlen += len;
-- 
1.7.12.289.g0ce9864.dirty

>From 995d723239df325b48412878fa818c94cb33f724 Mon Sep 17 00:00:00 2001
From: Andres Freund 
Date: Tue, 15 Jan 2013 00:58:49 +0100
Subject: [PATCH] xlogreader: use correct type

---
 src/backend/access/transam/xlogreader.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/access/transam/xlogreader.c b/src/backend/access/transam/xlogreader.c
index 9439c05..f2b9355 100644
--- a/src/backend/access/transam/xlogreader.c
+++ b/src/backend/access/transam/xlogreader.c
@@ -927,7 +927,7 @@ XLogFindNextRecord(XLogReaderState *state, XLogRecPtr RecPtr)
uint32  pageHeaderSize;
XLogPageHeader header;
XLogRecord *record;
-   uint32 readLen;
+   int readLen;
char   *errormsg;
 
if (RecPtr == InvalidXLogRecPtr)
-- 
1.7.12.289.g0ce9864.dirty




Re: [HACKERS] unlogged tables vs. GIST

2013-01-15 Thread Robert Haas
On Tue, Jan 15, 2013 at 1:10 PM, Heikki Linnakangas
 wrote:
> On 15.01.2013 08:54, Jeevan Chalke wrote:
>>
>> For (2), I have added a new function called, GetXLogRecPtrForUnloogedRel()
>> which returns a fake LSN for GiST indexes. However, I have removed
>> GetXLogRecPtrForTemp() function and added its functionality inside this
>> new
>> function itself to avoid complexity.
>
>
> I don't much care for using a new field in the control file for this. First,
> it seems like a big modularity violation to store a gist-specific counter in
> the control file. Second, you'd be generating a lot of traffic on the
> ControlFileLock. It's not heavily contended at the moment, but when the
> control file is updated, it's held over an fsync, which could cause
> unnecessary stalls to insertions to unlogged gist tables. And it's just a
> bad idea to share a lock for two things with completely different
> characteristics in general.
>
> Could we stash the counter e.g. in the root page of the index?

That would require maintaining a counter per table rather than a
single global counter, which would be bad because then we'd need to
store one counter in shared memory for every table, rather than just
one, period, which runs up against the fixed sizing of shared memory.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] unlogged tables vs. GIST

2013-01-15 Thread Heikki Linnakangas

On 15.01.2013 20:33, Robert Haas wrote:

On Tue, Jan 15, 2013 at 1:10 PM, Heikki Linnakangas

Could we stash the counter e.g. in the root page of the index?


That would require maintaining a counter per table rather than a
single global counter, which would be bad because then we'd need to
store one counter in shared memory for every table, rather than just
one, period, which runs up against the fixed sizing of shared memory.


I was thinking of just adding a new field to the root page header, and 
using that field as the counter. Something like:


XLogRecPtr
GetXLogRecPtrForTemp(void)
{
	Buffer		rootbuf;
	GISTPageOpaque	opaq;
	XLogRecPtr	nsn;

	rootbuf = ReadBuffer(rel, GIST_ROOT_BLKNO);
	LockBuffer(rootbuf, GIST_EXCLUSIVE);
	opaq = GistPageGetOpaque(BufferGetPage(rootbuf));
	nsn = opaq->counter++;
	UnlockReleaseBuffer(rootbuf);
	return nsn;
}

or perhaps we need a separate locking mechanism for that, like a new 
global lwlock or spinlock, to avoid deadlocks if someone is concurrently 
splitting the root page. In any case, the fixed-sizedness of shared 
memory isn't an issue here.
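The alternative of a single global counter is also tiny. Modeled here with Python threads purely for illustration (in the backend this would be a shared-memory variable under a spinlock; the class and names below are inventions, and crash-survival is deliberately ignored, which unlogged relations can tolerate):

```python
import threading

class FakeLSNCounter:
    """One process-wide monotonically increasing 'LSN' for unlogged indexes."""
    def __init__(self, start=1):
        self._lock = threading.Lock()   # stand-in for a shmem spinlock
        self._next = start

    def next_lsn(self):
        # Hold the lock only for the increment, analogous to a spinlock
        # critical section: no I/O, no fsync, minimal contention.
        with self._lock:
            lsn = self._next
            self._next += 1
            return lsn

counter = FakeLSNCounter()
results = []

def worker():
    for _ in range(1000):
        results.append(counter.next_lsn())

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every "backend" sees a unique, increasing value: no duplicates.
assert len(set(results)) == 4000
```

The fixed size of such a counter is exactly why it sidesteps the per-table shared-memory objection, at the cost of the crash-persistence question Tom raises below.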


- Heikki




Re: [HACKERS] unlogged tables vs. GIST

2013-01-15 Thread Tom Lane
Robert Haas  writes:
> On Tue, Jan 15, 2013 at 1:10 PM, Heikki Linnakangas
>  wrote:
>> Could we stash the counter e.g. in the root page of the index?

> That would require maintaining a counter per table rather than a
> single global counter, which would be bad because then we'd need to
> store one counter in shared memory for every table, rather than just
> one, period, which runs up against the fixed sizing of shared memory.

I think what Heikki had in mind was that the copy in the index would be
the authoritative one, not some image in shared memory.  This'd imply
dirtying the root page on every insert, as well as increased contention
for the root page, so it might have performance problems.

I think a bigger issue is where we'd find any space for it.  There's no
easily-spare space in a GIST page.  This reminds me again that the lack
of a metapage in GIST was a serious design error, which we should
correct if we ever break on-disk compatibility again.

I concur that adding such a counter to pg_control is a nonstarter,
though.

Given that we don't need crash recovery for an unlogged table, could
we get away with some variant of NSN that has weaker semantics than
XLOG LSNs?

regards, tom lane




Re: [HACKERS] Patches for TODO item: Avoid truncating empty OCDR temp tables on COMMIT

2013-01-15 Thread Robert Haas
On Tue, Jan 15, 2013 at 10:57 AM, Tom Lane  wrote:
> Gurjeet Singh  writes:
>> On Mon, Jan 14, 2013 at 10:33 PM, Tom Lane  wrote:
>>> I think this is unacceptable on its face.  It essentially supposes that
>>> relcache entries are reliable storage.  They are not.
>
>> Would it be acceptable if we inverted the meaning of the struct member, and
>> named it to  rd_rows_not_inserted. When registering an ON COMMIT action, we
>> can set this member to true, and set it to false when inserting a row into
>> it. The pre-commit hook will truncate any relation that doesn't have this
>> member set to true.
>
>> With that in place, even if the relcache entry is discarded midway through
>> the transaction, the cleanup code will truncate the relation, preserving
>> the correct behaviour.
>
> Well, that would fail in the safe direction, but it just seems
> excessively ugly and hard-to-understand.  Given the field demand for
> this optimization (which so far as I've noticed is nil), I'm not
> convinced we need to do this.

Yep, this is also why I prefer the approach of setting a global
variable.  The demand for this is not *precisely* zero or it wouldn't
be on the TODO list, but the global seems like it would head off the
worst of the damage without requiring any fiddling with the relcache.

On the third hand, the fact that a table is OCDR is recorded in
backend-local storage somewhere, and that storage (unlike the
relcache) had better be reliable.  So maybe there's some way to
finesse it that way.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] unlogged tables vs. GIST

2013-01-15 Thread Robert Haas
On Tue, Jan 15, 2013 at 1:48 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> On Tue, Jan 15, 2013 at 1:10 PM, Heikki Linnakangas
>>  wrote:
>>> Could we stash the counter e.g. in the root page of the index?
>
>> That would require maintaining a counter per table rather than a
>> single global counter, which would be bad because then we'd need to
>> store one counter in shared memory for every table, rather than just
>> one, period, which runs up against the fixed sizing of shared memory.
>
> I think what Heikki had in mind was that the copy in the index would be
> the authoritative one, not some image in shared memory.  This'd imply
> dirtying the root page on every insert, as well as increased contention
> for the root page, so it might have performance problems.
>
> I think a bigger issue is where we'd find any space for it.  There's no
> easily-spared space in a GIST page.  This reminds me again that the lack
> of a metapage in GIST was a serious design error, which we should
> correct if we ever break on-disk compatibility again.
>
> I concur that adding such a counter to pg_control is a nonstarter,
> though.
>
> Given that we don't need crash recovery for an unlogged table, could
> we get away with some variant of NSN that has weaker semantics than
> XLOG LSNs?

It needs to be strictly ascending and survive clean shutdowns.  Is
there some place we could preserve it other than the control file?

I was assuming we wanted a single sequence shared across all relations
rather than a sequence per relation, but I don't know of any reason
why that's actually required.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pg_retainxlog for inclusion in 9.3?

2013-01-15 Thread Robert Haas
On Tue, Jan 15, 2013 at 2:51 AM, Magnus Hagander  wrote:
> On Mon, Jan 14, 2013 at 5:56 PM, Robert Haas  wrote:
>> On Fri, Jan 4, 2013 at 4:55 PM, Dimitri Fontaine  
>> wrote:
>>> Robert Haas  writes:
 Mostly that it seems like a hack, and I suspect we may come up with a
 better way to do this in the future.
>>>
>>> Do you have the specs of such better way? Would it be a problem to have
>>> both pg_retainxlog and the new way?
>>
>> Well, I think in the long term we are likely to want the master to
>> have some kind of ability to track the positions of its slaves, even
>> when they are disconnected.  And, optionally, to retain the WAL that
>> they need, again even when they are disconnected.  If such an ability
>> materializes, this will be moot (even as I think that pg_standby is
>> now largely moot, at least for new installations, now that we have
>> standby_mode=on).
>
> I agree. But just as we had pg_standby for quite a while before we got
> standby_mode=on, I believe we should have pg_retainxlog (or something
> like it) until we have something more integrated.

Yep, not disagreeing.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] unlogged tables vs. GIST

2013-01-15 Thread Heikki Linnakangas

On 15.01.2013 20:48, Tom Lane wrote:

Robert Haas  writes:

On Tue, Jan 15, 2013 at 1:10 PM, Heikki Linnakangas
  wrote:

Could we stash the counter e.g. in the root page of the index?



That would require maintaining a counter per table rather than a
single global counter, which would be bad because then we'd need to
store one counter in shared memory for every table, rather than just
one, period, which runs up against the fixed sizing of shared memory.


I think what Heikki had in mind was that the copy in the index would be
the authoritative one, not some image in shared memory.  This'd imply
dirtying the root page on every insert, as well as increased contention
for the root page, so it might have performance problems.


Not every insert, just every split. Which might still be a performance 
problem, but an order of magnitude smaller.



I think a bigger issue is where we'd find any space for it.  There's no
easily-spared space in a GIST page.


We could use a larger opaque struct, with the additional field, for the 
root page, for new indexes. As long as we continue to support the 
current layout too, that won't break on-disk compatibility. We didn't 
support unlogged gist indexes before, so we won't have to worry about 
upgrading unlogged indexes that miss the field.


Or if 32 bits is enough for this, we could reuse the right-link. The 
root page has no right link, so it can be repurposed.



This reminds me again that the lack
of a metapage in GIST was a serious design error, which we should
correct if we ever break on-disk compatibility again.


Yeah.


I concur that adding such a counter to pg_control is a nonstarter,
though.

Given that we don't need crash recovery for an unlogged table, could
we get away with some variant of NSN that has weaker semantics than
XLOG LSNs?


One thought I had is that we only generate an NSN when a page is split, 
and gist never deletes pages, so how about simply using the block number 
of the newly split page as the NSN? That would foreclose the option of 
reinventing page recycling in the future, though.
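
For illustration only (hypothetical names, not the real PostgreSQL data 
structures): the NSN machinery under discussion boils down to a counter 
that is bumped only at page splits, so a scan that descended from the 
parent before a concurrent split can detect the split and chase the 
right-link. A toy sketch of that invariant:

```python
# Toy model of an NSN / follow-right protocol. All names are invented
# for illustration; this is not PostgreSQL's GiST code.

class Page:
    def __init__(self, nsn=0):
        self.nsn = nsn          # stamped with the counter value at split time
        self.rightlink = None   # sibling page created by a split, if any

class ToyIndex:
    def __init__(self):
        self.split_counter = 0  # per-index counter, bumped only on splits

    def split(self, page):
        # Bump the counter and stamp both halves of the split; a real
        # index would also move half the tuples to the new right sibling.
        self.split_counter += 1
        new_right = Page(nsn=self.split_counter)
        page.nsn = self.split_counter
        page.rightlink = new_right
        return new_right

def must_follow_right(page, nsn_at_descent):
    # A scan remembers the counter when it reads the parent; if the child
    # was split after that point, the moved tuples are reachable only
    # through the rightlink.
    return page.rightlink is not None and page.nsn > nsn_at_descent

idx = ToyIndex()
child = Page()
nsn_seen = idx.split_counter   # scan reads parent, remembers counter
idx.split(child)               # split happens after the descent
print(must_follow_right(child, nsn_seen))  # -> True
```

The counter only has to be ascending and survive clean shutdowns, which 
is why weaker-than-LSN schemes (like the block-number idea above) are on 
the table at all.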


- Heikki




Re: [HACKERS] Curious buildfarm failures (fwd)

2013-01-15 Thread Andres Freund
On 2013-01-15 17:27:50 +, Sergey Koposov wrote:
> Hi,
> 
> >Date: Tue, 15 Jan 2013 11:57:07 -0500
> >From: Tom Lane 
> >To: Andres Freund 
> >Cc: m...@sai.msu.ru, pgsql-hackers@postgreSQL.org,
> >   Andrew Dunstan 
> >Subject: Re: Curious buildfarm failures
> >
> >Well, it could be quite reproducible, if for example what's happening is
> >that the DROP is failing to wait for the checkpointer at all.
> >
> >Is there a way to enable log_checkpoints during a buildfarm run?
> >It'd be good to get timestamps added to the postmaster log entries, too.
> 
> Here is the log output from the failing pg_regress after enabling checkpoints 
> and timestamps:
> 
> 2013-01-15 21:20:19 MSK [50f58fd3.589e:1] LOG:  database system was shut down 
> at 2013-01-15 21:20:19 MS
> K
> 2013-01-15 21:20:19 MSK [50f58fd3.58a2:1] LOG:  autovacuum launcher started
> 2013-01-15 21:20:19 MSK [50f58fd3.5899:1] LOG:  database system is ready to 
> accept connections
> 2013-01-15 21:20:20 MSK [50f58fd3.589f:1] LOG:  checkpoint starting: 
> immediate force wait
> 2013-01-15 21:20:21 MSK [50f58fd3.589f:2] LOG:  checkpoint complete: wrote 3 
> buffers (0.0%); 0 transact
> ion log file(s) added, 0 removed, 0 recycled; write=0.604 s, sync=0.000 s, 
> total=0.605 s; sync files=0,
>  longest=0.000 s, average=0.000 s
> 2013-01-15 21:20:21 MSK [50f58fd3.589f:3] LOG:  checkpoint starting: 
> immediate force wait
> 2013-01-15 21:20:21 MSK [50f58fd3.589f:4] LOG:  checkpoint complete: wrote 0 
> buffers (0.0%); 0 transact
> ion log file(s) added, 0 removed, 0 recycled; write=0.000 s, sync=0.000 s, 
> total=0.000 s; sync files=0,
>  longest=0.000 s, average=0.000 s
> 2013-01-15 21:20:21 MSK [50f58fd5.58ac:1] ERROR:  unrecognized parameter 
> "some_nonexistent_parameter"
> 2013-01-15 21:20:21 MSK [50f58fd5.58ac:2] STATEMENT:  ALTER TABLESPACE 
> testspace SET (some_nonexistent_
> parameter = true);
> 2013-01-15 21:20:21 MSK [50f58fd5.58ac:3] ERROR:  RESET must not include 
> values for parameters
> 2013-01-15 21:20:21 MSK [50f58fd5.58ac:4] STATEMENT:  ALTER TABLESPACE 
> testspace RESET (random_page_cos
> t = 2.0);
> 2013-01-15 21:20:21 MSK [50f58fd5.58ac:5] ERROR:  duplicate key value 
> violates unique constraint "anind
> ex"
> 2013-01-15 21:20:21 MSK [50f58fd5.58ac:6] DETAIL:  Key (column1)=(1) already 
> exists.
> 2013-01-15 21:20:21 MSK [50f58fd5.58ac:7] STATEMENT:  INSERT INTO 
> testschema.atable VALUES(1);
> 2013-01-15 21:20:21 MSK [50f58fd5.58ac:8] ERROR:  directory 
> "/no/such/location" does not exist
> 2013-01-15 21:20:21 MSK [50f58fd5.58ac:9] STATEMENT:  CREATE TABLESPACE 
> badspace LOCATION '/no/such/loc
> ation';
> 2013-01-15 21:20:21 MSK [50f58fd5.58ac:10] ERROR:  tablespace "nosuchspace" 
> does not exist
> 2013-01-15 21:20:21 MSK [50f58fd5.58ac:11] STATEMENT:  CREATE TABLE bar (i 
> int) TABLESPACE nosuchspace;
> 2013-01-15 21:20:21 MSK [50f58fd3.589f:5] LOG:  checkpoint starting: 
> immediate force wait
> 2013-01-15 21:20:21 MSK [50f58fd3.589f:6] LOG:  checkpoint complete: wrote 37 
> buffers (0.2%); 0 transac
> tion log file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.000 s, 
> total=0.001 s; sync files=0
> , longest=0.000 s, average=0.000 s
> 2013-01-15 21:20:21 MSK [50f58fd5.58ac:12] ERROR:  tablespace "testspace" is 
> not empty
> 2013-01-15 21:20:21 MSK [50f58fd5.58ac:13] STATEMENT:  DROP TABLESPACE 
> testspace;
> 2013-01-15 21:20:21 MSK [50f58fd3.589f:7] LOG:  checkpoint starting: 
> immediate force wait
> 2013-01-15 21:20:21 MSK [50f58fd3.589f:8] LOG:  checkpoint complete: wrote 9 
> buffers (0.1%); 0 transact
> ion log file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.000 s, 
> total=0.001 s; sync files=0,
>  longest=0.000 s, average=0.000 s
> 2013-01-15 21:20:21 MSK [50f58fd5.58ac:14] ERROR:  tablespace "testspace" is 
> not empty
> 2013-01-15 21:20:21 MSK [50f58fd5.58ac:15] STATEMENT:  DROP TABLESPACE 
> testspace;
> 
> 
> And I do see the tblspc file left after the finish of "make check":
>   tmp_check/data/pg_tblspc/16385/PG_9.3_201212081/16384/16387
> 
> Cheers,
>   S
> 
> PS I wouldn't be surprised that it is a compiler bug though. But I did see
> the failure with newer icc as well.

Any chance you could run make check again but with log_statement=all and
log_min_messages=debug2? That might tell us a bit more, whether the
dependency code doesn't work right or whether the checkpoint is doing
strange things.

Thanks,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] json api WIP patch

2013-01-15 Thread David Fetter
On Mon, Jan 14, 2013 at 07:52:56PM -0500, Andrew Dunstan wrote:
> 
> On 01/14/2013 07:36 PM, Merlin Moncure wrote:
> >While testing this I noticed that integer based 'get' routines are
> >zero based -- was this intentional?  Virtually all other aspects of
> >SQL are 1 based:
> >
> >postgres=# select json_get('[1,2,3]', 1);
> >  json_get
> >--
> >  2
> >(1 row)
> >
> >postgres=# select json_get('[1,2,3]', 0);
> >  json_get
> >--
> >  1
> >(1 row)
> 
> Yes, it's intentional. SQL arrays might be 1-based by default, but
> JavaScript arrays are not. JsonPath and similar gadgets treat the
> arrays as zero-based. I suspect the Json-using community would not
> thank us for being overly SQL-centric on this - and I say that as
> someone who has always thought zero based arrays were a major design
> mistake, responsible for countless off-by-one errors.

Perhaps we could compromise by making arrays 0.5-based.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] unlogged tables vs. GIST

2013-01-15 Thread Andres Freund
On 2013-01-15 20:58:00 +0200, Heikki Linnakangas wrote:
> On 15.01.2013 20:48, Tom Lane wrote:
> >Robert Haas  writes:
> >>On Tue, Jan 15, 2013 at 1:10 PM, Heikki Linnakangas
> >>  wrote:
> >>>Could we stash the counter e.g. in the root page of the index?
> >
> >>That would require maintaining a counter per table rather than a
> >>single global counter, which would be bad because then we'd need to
> >>store one counter in shared memory for every table, rather than just
> >>one, period, which runs up against the fixed sizing of shared memory.
> >
> >I think what Heikki had in mind was that the copy in the index would be
> >the authoritative one, not some image in shared memory.  This'd imply
> >dirtying the root page on every insert, as well as increased contention
> >for the root page, so it might have performance problems.
> 
> Not every insert, just every split. Which might still be a performance
> problem, but an order of magnitude smaller.

I might be dense here and I don't really know that code, but if its only
splits why not do an XLogInsert(XLOG_GIST_NSN) or something there?
Inventing some other form of logging just because its an unlogged table
seems like reinventing the wheel.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Patches for TODO item: Avoid truncating empty OCDR temp tables on COMMIT

2013-01-15 Thread Tom Lane
Robert Haas  writes:
> On the third hand, the fact that a table is OCDR is recorded in
> backend-local storage somewhere, and that storage (unlike the
> relcache) had better be reliable.  So maybe there's some way to
> finesse it that way.

Hm, keep a flag in that storage you mean?  Yeah, that could possibly
work.

regards, tom lane




Re: [HACKERS] Curious buildfarm failures (fwd)

2013-01-15 Thread Sergey Koposov

On Tue, 15 Jan 2013, Andres Freund wrote:

Any chance you could run make check again but with log_statement=all and
log_min_messages=debug2? That might tell us a bit more, whether the
dependency code doesn't work right or whether the checkpoint is doing
strange things.


Here it is:


2013-01-15 23:06:18 MSK [50f5a8aa.1162:1] DEBUG:  SlruScanDirectory invoking 
callback on pg_notify/
2013-01-15 23:06:18 MSK [50f5a8aa.1162:2] DEBUG:  removing file "pg_notify/"
2013-01-15 23:06:18 MSK [50f5a8aa.1162:3] DEBUG:  max_safe_fds = 985, 
usable_fds = 1000, already_open = 5
2013-01-15 23:06:18 MSK [50f5a8aa.1167:1] LOG:  database system was shut down 
at 2013-01-15 23:06:18 MSK
2013-01-15 23:06:18 MSK [50f5a8aa.1167:2] DEBUG:  checkpoint record is at 
0/17700E0
2013-01-15 23:06:18 MSK [50f5a8aa.1167:3] DEBUG:  redo record is at 0/17700E0; 
shutdown TRUE
2013-01-15 23:06:18 MSK [50f5a8aa.1167:4] DEBUG:  next transaction ID: 0/686; 
next OID: 12031
2013-01-15 23:06:18 MSK [50f5a8aa.1167:5] DEBUG:  next MultiXactId: 1; next 
MultiXactOffset: 0
2013-01-15 23:06:18 MSK [50f5a8aa.1167:6] DEBUG:  oldest unfrozen transaction 
ID: 676, in database 1
2013-01-15 23:06:18 MSK [50f5a8aa.1167:7] DEBUG:  transaction ID wrap limit is 
2147484323, limited by database with OID 1
2013-01-15 23:06:18 MSK [50f5a8aa.1168:1] DEBUG:  checkpointer updated shared 
memory configuration values
2013-01-15 23:06:18 MSK [50f5a8aa.116b:1] LOG:  autovacuum launcher started
2013-01-15 23:06:18 MSK [50f5a8aa.1162:4] LOG:  database system is ready to 
accept connections
2013-01-15 23:06:19 MSK [50f5a8aa.1162:5] DEBUG:  forked new backend, pid=4463 
socket=8
2013-01-15 23:06:19 MSK [50f5a8aa.1162:6] DEBUG:  server process (PID 4463) 
exited with exit code 0
2013-01-15 23:06:19 MSK [50f5a8aa.1162:7] DEBUG:  forked new backend, pid=4465 
socket=8
2013-01-15 23:06:19 MSK [50f5a8ab.1171:1] LOG:  statement: CREATE DATABASE 
"regression" TEMPLATE=template0
2013-01-15 23:06:19 MSK [50f5a8aa.1168:2] LOG:  checkpoint starting: immediate 
force wait
2013-01-15 23:06:19 MSK [50f5a8aa.1168:3] DEBUG:  SlruScanDirectory invoking 
callback on pg_multixact/offsets/
2013-01-15 23:06:19 MSK [50f5a8aa.1168:4] DEBUG:  SlruScanDirectory invoking 
callback on pg_multixact/members/
2013-01-15 23:06:19 MSK [50f5a8aa.1168:5] DEBUG:  attempting to remove WAL 
segments older than log file 
2013-01-15 23:06:19 MSK [50f5a8aa.1168:6] DEBUG:  SlruScanDirectory invoking 
callback on pg_subtrans/
2013-01-15 23:06:19 MSK [50f5a8aa.1168:7] LOG:  checkpoint complete: wrote 3 
buffers (0.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; 
write=0.001 s, sync=0.000 s, total=0.001 s; sync files=0, longest=0.000 s, 
average=0.000 s
2013-01-15 23:06:19 MSK [50f5a8aa.1168:8] LOG:  checkpoint starting: immediate 
force wait
2013-01-15 23:06:19 MSK [50f5a8aa.1168:9] DEBUG:  attempting to remove WAL 
segments older than log file 
2013-01-15 23:06:19 MSK [50f5a8aa.1168:10] DEBUG:  SlruScanDirectory invoking 
callback on pg_subtrans/
2013-01-15 23:06:19 MSK [50f5a8aa.1168:11] LOG:  checkpoint complete: wrote 0 
buffers (0.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; 
write=0.001 s, sync=0.000 s, total=0.001 s; sync files=0, longest=0.000 s, 
average=0.000 s
2013-01-15 23:06:19 MSK [50f5a8aa.1162:8] DEBUG:  server process (PID 4465) 
exited with exit code 0
2013-01-15 23:06:19 MSK [50f5a8aa.1162:9] DEBUG:  forked new backend, pid=4467 
socket=8
2013-01-15 23:06:19 MSK [50f5a8ab.1173:1] LOG:  statement: ALTER DATABASE "regression" SET lc_messages TO 'C';ALTER 
DATABASE "regression" SET lc_monetary TO 'C';ALTER DATABASE "regression" SET lc_numeric TO 'C';ALTER DATABASE 
"regression" SET lc_time TO 'C';ALTER DATABASE "regression" SET timezone_abbreviations TO 'Default';
2013-01-15 23:06:19 MSK [50f5a8aa.1162:10] DEBUG:  server process (PID 4467) 
exited with exit code 0
2013-01-15 23:06:19 MSK [50f5a8aa.1162:11] DEBUG:  forked new backend, pid=4469 
socket=8
2013-01-15 23:06:19 MSK [50f5a8ab.1175:1] LOG:  statement: CREATE TABLESPACE 
testspace LOCATION '/home/math/pg_git/src/test/regress/testtablespace';
2013-01-15 23:06:19 MSK [50f5a8ab.1175:2] LOG:  statement: ALTER TABLESPACE 
testspace SET (random_page_cost = 1.0);
2013-01-15 23:06:19 MSK [50f5a8ab.1175:3] LOG:  statement: ALTER TABLESPACE 
testspace SET (some_nonexistent_parameter = true);
2013-01-15 23:06:19 MSK [50f5a8ab.1175:4] ERROR:  unrecognized parameter 
"some_nonexistent_parameter"
2013-01-15 23:06:19 MSK [50f5a8ab.1175:5] STATEMENT:  ALTER TABLESPACE 
testspace SET (some_nonexistent_parameter = true);
2013-01-15 23:06:19 MSK [50f5a8ab.1175:6] LOG:  statement: ALTER TABLESPACE 
testspace RESET (random_page_cost = 2.0);
2013-01-15 23:06:19 MSK [50f5a8ab.1175:7] ERROR:  RESET must not include values 
for parameters
2013-01-15 23:06:19 MSK [50f5a8ab.1175:8] STATEMENT:  ALTER TABLESPACE 
testspace RESET (random_page_cost = 2.0);
2013-01-15 23:

Re: [HACKERS] buffer assertion tripping under repeat pgbench load

2013-01-15 Thread Bruce Momjian
On Sun, Jan 13, 2013 at 12:34:07AM -0500, Greg Smith wrote:
> On 12/26/12 7:23 PM, Greg Stark wrote:
> >It's also possible it's a bad cpu, not bad memory. If it affects
> >decrement or increment in particular it's possible that the pattern of
> >usage on LocalRefCount is particularly prone to triggering it.
> 
> This looks to be the winning answer.  It turns out that under
> extended multi-hour loads at high concurrency, something related to
> CPU overheating was occasionally flipping a bit.  One round of
> compressed air for all the fans/vents, a little tweaking of the fan
> controls, and now the system goes >24 hours with no problems.

Odd your system didn't report the problem to you.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




[HACKERS] pkg-config files for libpq and ecpg

2013-01-15 Thread Peter Eisentraut
I'll take another stab at providing pkg-config files for the client-side
libraries.

The main reason this time around is that this works a lot better (or at
all) for multi-arch library installations.

Another is that pkg-config has become a lot smarter and flexible over
the years, and it's probably a better choice for users who are already
used to its interface.  There is a lot of confusion, for example, about
what pg_config --libs really means.  We often evade that by saying,
well, those are the libraries we linked with, but there is a lack of
clarity in that context about what libraries a user should link with.

The way it's implemented, it doesn't require manual maintenance, so it
should not be much of a bother.

A side issue that arose: libecpg_compat is linked with libpq, but
doesn't seem to use it.  This was added many years ago in
cd75f94dafd43358305811b7576ad75d889097e3, but it doesn't appear to be
required anymore.  Needs some checking.
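
For readers who haven't used pkg-config: each .pc file is just a handful 
of key-value lines. A rough sketch (in Python, with illustrative values, 
not the exact output of a real build) of what the new %.pc rule in the 
patch below would emit for libpq:

```python
# Sketch of the .pc file contents the proposed make rule generates.
# Paths and version are placeholders, not taken from a real install.
def render_pc(name, version, includedir, libdir, requires_private=""):
    return "\n".join([
        f"Name: lib{name}",
        f"Description: PostgreSQL lib{name} library",
        "Url: http://www.postgresql.org/",
        f"Version: {version}",
        "Requires: ",
        f"Requires.private: {requires_private}",
        f"Cflags: -I{includedir}",
        f"Libs: -L{libdir} -l{name}",
    ])

pc = render_pc("pq", "9.3devel",
               "/usr/local/pgsql/include", "/usr/local/pgsql/lib")
print(pc)

# A client application would then build with something like:
#   cc app.c $(pkg-config --cflags --libs libpq)
```

This is exactly the ambiguity pkg-config resolves versus pg_config: Libs 
is what users link with, Libs.private is what the library itself needed.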
diff --git a/.gitignore b/.gitignore
index 1e15ce5..b2854d9 100644
--- a/.gitignore
+++ b/.gitignore
@@ -21,6 +21,7 @@ lcov.info
 win32ver.rc
 *.exe
 lib*dll.def
+lib*.pc
 
 # Local excludes in root directory
 /GNUmakefile
diff --git a/src/Makefile.shlib b/src/Makefile.shlib
index 4da2f10..d9728ba 100644
--- a/src/Makefile.shlib
+++ b/src/Makefile.shlib
@@ -87,6 +87,7 @@ shlib_bare= lib$(NAME)$(DLSUFFIX)
 # Testing the soname variable is a reliable way to determine whether a
 # linkable library is being built.
 soname = $(shlib_major)
+pkgconfigdir = $(libdir)/pkgconfig
 else
 # Naming convention for dynamically loadable modules
 shlib  = $(NAME)$(DLSUFFIX)
@@ -305,6 +306,7 @@ all-lib: all-shared-lib
 ifdef soname
 # no static library when building a dynamically loadable module
 all-lib: all-static-lib
+all-lib: lib$(NAME).pc
 endif
 
 all-static-lib: $(stlib)
@@ -388,6 +390,18 @@ $(stlib): $(shlib) $(DLL_DEFFILE) | $(SHLIB_PREREQS)
 endif # PORTNAME == cygwin || PORTNAME == win32
 
 
+%.pc: $(MAKEFILE_LIST)
+   echo 'Name: lib$(NAME)' >$@
+   echo 'Description: PostgreSQL lib$(NAME) library' >>$@
+   echo 'Url: http://www.postgresql.org/' >>$@
+   echo 'Version: $(VERSION)' >>$@
+   echo 'Requires: ' >>$@
+   echo 'Requires.private: $(PKG_CONFIG_REQUIRES_PRIVATE)' >>$@
+   echo 'Cflags: -I$(includedir)' >>$@
+   echo 'Libs: -L$(libdir) -l$(NAME)' >>$@
+   echo 'Libs.private: $(filter-out 
$(PKG_CONFIG_REQUIRES_PRIVATE:lib%=-l%),$(filter-out -L..%, $(SHLIB_LINK)))' 
>>$@
+
+
 # We need several not-quite-identical variants of .DEF files to build
 # DLLs for Windows.  These are made from the single source file
 # exports.txt.  Since we can't assume that Windows boxes will have
@@ -430,8 +444,12 @@ endif # SHLIB_EXPORTS
 install-lib: install-lib-shared
 ifdef soname
 install-lib: install-lib-static
+install-lib: install-lib-pc
 endif
 
+install-lib-pc: lib$(NAME).pc installdirs-lib
+   $(INSTALL_DATA) $< '$(DESTDIR)$(pkgconfigdir)/lib$(NAME).pc'
+
 install-lib-static: $(stlib) installdirs-lib
$(INSTALL_STLIB) $< '$(DESTDIR)$(libdir)/$(stlib)'
 ifeq ($(PORTNAME), darwin)
@@ -467,7 +485,7 @@ endif
 
 installdirs-lib:
 ifdef soname
-   $(MKDIR_P) '$(DESTDIR)$(libdir)'
+   $(MKDIR_P) '$(DESTDIR)$(libdir)' '$(DESTDIR)$(pkgconfigdir)'
 else
$(MKDIR_P) '$(DESTDIR)$(pkglibdir)'
 endif
@@ -483,7 +501,8 @@ ifdef soname
rm -f '$(DESTDIR)$(libdir)/$(stlib)'
rm -f '$(DESTDIR)$(libdir)/$(shlib_bare)' \
  '$(DESTDIR)$(libdir)/$(shlib_major)' \
- '$(DESTDIR)$(libdir)/$(shlib)'
+ '$(DESTDIR)$(libdir)/$(shlib)' \
+ '$(DESTDIR)$(pkgconfigdir)/lib$(NAME).pc'
 else # no soname
rm -f '$(DESTDIR)$(pkglibdir)/$(shlib)'
 endif # no soname
@@ -495,7 +514,7 @@ endif # no soname
 
 .PHONY: clean-lib
 clean-lib:
-   rm -f $(shlib) $(shlib_bare) $(shlib_major) $(stlib) $(exports_file)
+   rm -f $(shlib) $(shlib_bare) $(shlib_major) $(stlib) $(exports_file) 
lib$(NAME).pc
 
 ifneq (,$(SHLIB_EXPORTS))
 maintainer-clean-lib:
diff --git a/src/interfaces/ecpg/compatlib/Makefile 
b/src/interfaces/ecpg/compatlib/Makefile
index 00adcdb..6ae1493 100644
--- a/src/interfaces/ecpg/compatlib/Makefile
+++ b/src/interfaces/ecpg/compatlib/Makefile
@@ -18,10 +18,10 @@ SO_MAJOR_VERSION= 3
 SO_MINOR_VERSION= 5
 
 override CPPFLAGS := -I../include -I$(top_srcdir)/src/interfaces/ecpg/include \
-   -I$(libpq_srcdir) -I$(top_srcdir)/src/include/utils $(CPPFLAGS)
+   -I$(top_srcdir)/src/include/utils $(CPPFLAGS)
 override CFLAGS += $(PTHREAD_CFLAGS)
 
-SHLIB_LINK = -L../ecpglib -lecpg -L../pgtypeslib -lpgtypes $(libpq) \
+SHLIB_LINK = -L../ecpglib -lecpg -L../pgtypeslib -lpgtypes \
$(filter -lintl -lm, $(LIBS)) $(PTHREAD_LIBS)
 SHLIB_PREREQS = submake-ecpglib submake-pgtypeslib
 
@@ -32,6 +32,8 @@ LIBS := $(filter-out -lpgport, $(LIBS))
 
 OBJS= informix.o $(filter snprintf.o, $(LIBOBJS))
 
+PKG_CONFIG_REQUIRES_PRIVATE = libecpg libpgtypes
+
 all: all-lib
 
 .

Re: [HACKERS] count(*) of zero rows returns 1

2013-01-15 Thread Bruce Momjian
On Mon, Jan 14, 2013 at 10:47:58PM -0500, Gurjeet Singh wrote:
> Interesting to note that SELECT * FROM table_with_zero_cols does not complain
> of anything.
>  
> postgres=# select * from test1;
> --
> (0 rows)
> 
> This I believe result of the fact that we allow user to drop all columns of a
> table.
> 
> On a side note, Postgres allows me to do this (which I don't think is a bug or
> useless): I inserted some rows into a table, and then dropped the columns. The
> resulting table has no columns, but live rows.
> 
> postgres=# select * from test_0_col_table ;
> --
> (20 rows)

I found this funny.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] count(*) of zero rows returns 1

2013-01-15 Thread Bruce Momjian
On Mon, Jan 14, 2013 at 10:25:39PM -0500, Gurjeet Singh wrote:
> On Mon, Jan 14, 2013 at 3:09 PM, David Johnston  wrote:
> 
> What does "SELECT * FROM dual" in Oracle yield?
> 
> 
> AFAICR, 'dual' table has one column named 'DUMMY' and one row with value,
> single character X.

How elegant.  :-(

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] [PATCH] COPY .. COMPRESSED

2013-01-15 Thread Bruce Momjian
On Tue, Jan 15, 2013 at 10:55:04AM -0500, Stephen Frost wrote:
> * Peter Eisentraut (pete...@gmx.net) wrote:
> > Operating on compressed files transparently in file_fdw is obviously
> > useful, but why only gzip?
> 
> This isn't really an argument, imv.  It's only gzip *right this moment*
> because that's all that I implemented.  I've already offered to add
> bzip2 or whatever else people would like.

And this leads to support-my-compression-binary-of-the-day mess.  Why
not just allow them to do '|compression-binary'?
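
For what it's worth, the pipe approach keeps the server oblivious to the 
compression format; the client side just filters the stream through 
whatever binary it likes. A minimal sketch of the round trip, with 
Python's gzip module standing in for an arbitrary external 
compression-binary:

```python
import gzip

# Stand-in for "COPY t TO STDOUT | compression-binary": compress the
# COPY text stream, then decompress it to recover the original rows,
# as the matching "compression-binary -d | COPY t FROM STDIN" would.
copy_stream = b"1\tone\n2\ttwo\n"

compressed = gzip.compress(copy_stream)   # what lands in the file
restored = gzip.decompress(compressed)    # what COPY FROM would read

print(restored == copy_stream)  # -> True
```

Any filter with an inverse works the same way, which is the appeal over 
hard-wiring one codec into the server.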

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] count(*) of zero rows returns 1

2013-01-15 Thread Shaun Thomas

On 01/15/2013 01:18 PM, Bruce Momjian wrote:


AFAICR, 'dual' table has one column named 'DUMMY' and one row with
value, single character X.


How elegant.  :-(


Let's see what EnterpriseDB produces:

test=> select * from dual;

 dummy
---
 X
(1 row)

Yep, elegant gets my vote. ;) But then again, Oracle also created 
VARCHAR2 and told everyone to start using that, just in case they ever 
modified VARCHAR to be SQL compliant. Thankfully we have you guys, so PG 
won't go down a similar route.


--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com

__

See http://www.peak6.com/email_disclaimer/ for terms and conditions related to 
this email




Re: [HACKERS] pg_ctl idempotent option

2013-01-15 Thread Bruce Momjian
On Tue, Jan 15, 2013 at 10:25:23AM -0500, Tom Lane wrote:
> Alvaro Herrera  writes:
> > Vik Reykja escribi�:
> >> On Mon, Jan 14, 2013 at 4:22 PM, Tom Lane  wrote:
> >>> Idempotent is a ten-dollar word.  Can we find something that average
> >>> people wouldn't need to consult a dictionary to understand?
> 
> >> I disagree that we should dumb things down when the word means exactly what
> >> we want and based on the rest of this thread is the only word or word
> >> cluster that carries the desired meaning.
> 
> I'm not quite convinced that it means *exactly* what we want.  The
> dictionary definition, according to my laptop's dictionary, is "denoting
> an element of a set that is unchanged in value when multiplied or
> otherwise operated on by itself".  I'm well aware that computer people
> often use it to mean "an operation that doesn't change the system state
> if the state is already what's wanted", but I think that's probably an
> abuse of the mathematical usage.  And in any case, I'm not sure that
> non-hackers would immediately recognize the term, nor be enlightened by
> their dictionaries.  But ...

I have heard idempotent used several times by our folks, and I didn't
know what it meant either.  I figured it was a "strong item".  ;-)  I
just looked it up.
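
For the record, the two senses differ only slightly: the mathematical one 
says x operated on itself gives x back; the operational one says applying 
an operation twice leaves the state the same as applying it once, which 
is all the pg_ctl option promises. A two-line illustration (hypothetical 
toy code, not pg_ctl itself):

```python
# An operation is idempotent (in the operational sense) when applying it
# twice has the same effect as applying it once -- like "pg_ctl start"
# on an already-running server under the proposed option.
def start(state):
    return "running"   # starting a running server changes nothing

state = "stopped"
print(start(start(state)) == start(state))  # -> True
```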

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] pg_ctl idempotent option

2013-01-15 Thread Bruce Momjian
On Tue, Jan 15, 2013 at 10:55:41AM -0500, Peter Eisentraut wrote:
> On 1/14/13 10:22 AM, Tom Lane wrote:
> > Idempotent is a ten-dollar word.  Can we find something that average
> > people wouldn't need to consult a dictionary to understand?
> 
> My suggestion in the original thread was --oknodo, but people didn't
> like that either.

That's Japanese for idempotent.  ;-)  LOL

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] pg_ctl idempotent option

2013-01-15 Thread Boszormenyi Zoltan

2013-01-15 20:28 keltezéssel, Bruce Momjian írta:

On Tue, Jan 15, 2013 at 10:25:23AM -0500, Tom Lane wrote:

Alvaro Herrera  writes:

Vik Reykja escribió:

On Mon, Jan 14, 2013 at 4:22 PM, Tom Lane  wrote:

Idempotent is a ten-dollar word.  Can we find something that average
people wouldn't need to consult a dictionary to understand?

I disagree that we should dumb things down when the word means exactly what
we want and based on the rest of this thread is the only word or word
cluster that carries the desired meaning.

I'm not quite convinced that it means *exactly* what we want.  The
dictionary definition, according to my laptop's dictionary, is "denoting
an element of a set that is unchanged in value when multiplied or
otherwise operated on by itself".  I'm well aware that computer people
often use it to mean "an operation that doesn't change the system state
if the state is already what's wanted", but I think that's probably an
abuse of the mathematical usage.  And in any case, I'm not sure that
non-hackers would immediately recognize the term, nor be enlightened by
their dictionaries.  But ...

I have heard idempotent used several times by our folks, and I didn't
know what it meant either.  I figured it was a "strong item".  ;-)  I
just looked it up.


The Hungarian math term for idempotent translates literally to
"leave it in place". The term also has a slang usage for beating somebody 
up. ;-)

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/





Re: [HACKERS] Curious buildfarm failures (fwd)

2013-01-15 Thread Tom Lane
Sergey Koposov  writes:
> And I do see the tblspc file left after the finish of "make check":
>   tmp_check/data/pg_tblspc/16385/PG_9.3_201212081/16384/16387

Interesting.  If the tests are run immediately after initdb, 16387
is the relfilenode assigned to table "foo" in the tablespace regression
test.  But why would only that table be left behind?  There are half
a dozen relations in that tablespace at the point of the DROP CASCADE.

BTW, I just finished trying to reproduce this on an IA64 machine
belonging to Red Hat, without success.  So that seems to eliminate
any possibility of the machine architecture being the trigger issue.
The compiler's still a likely cause though.

Anybody have a similar ICC version (dugong's says it is 10.0 20070809)
to try?  Also, Sergey, can you find a non-dot-zero release to try?

regards, tom lane




Re: [HACKERS] [PATCH] unified frontend support for pg_malloc et al and palloc/pfree mulation (was xlogreader-v4)

2013-01-15 Thread Robert Haas
On Sun, Jan 13, 2013 at 4:16 PM, Tom Lane  wrote:
> Andres Freund  writes:
>>> Basically, the aspects of this that I think are likely to be
>>> reproducible wins across different platforms are (a) teaching the
>>> compiler that elog(ERROR) doesn't return, and (b) reducing code size as
>>> much as possible.  The single-function change isn't going to help on
>>> either ground --- maybe it would have helped on (b) without the errno
>>> problem, but that's going to destroy any possible code size savings.
>
>> Agreed. I am happy to produce an updated patch unless youre already on
>> it?
>
> On it now (busy testing on some old slow boxes, else I'd be done already).

Just a random thought here...

There are an awful lot of places in our source tree where the error
level is fixed.  We could invent a new construct, say ereport_error or
so, that is just like ereport except that it takes no error-level
parameter because it's hard-coded to ERROR.

It would be a bit of a pain to change all of the existing call sites,
but presumably it would dodge a lot of these issues about the way
compilers optimize things, because we could simply say categorically
that ereport_error NEVER returns.
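A rough illustration of the idea (invented names, not the actual PostgreSQL API): a reporting function hard-coded to the ERROR level can carry a GCC-style noreturn attribute, so every call site lets the compiler drop unreachable code and bogus "may be used uninitialized" warnings.

```c
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sketch: like ereport(ERROR, ...) with the level baked
 * in, so the declaration can promise it never returns. */
__attribute__((noreturn))
static void ereport_error_demo(const char *fmt, ...)
{
    va_list ap;

    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
    abort();   /* the compiler already knows abort() does not return */
}

/* Because the call above never returns, the compiler knows the final
 * return statement is unreachable when b == 0. */
static int safe_div(int a, int b)
{
    if (b == 0)
        ereport_error_demo("division by zero\n");
    return a / b;
}
```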

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PERFORM] Slow query: bitmap scan troubles

2013-01-15 Thread Bruce Momjian
On Mon, Jan 14, 2013 at 12:56:37PM -0500, Tom Lane wrote:
> > The reported behavior was that the planner would prefer to
> > sequential-scan the table rather than use the index, even if
> > enable_seqscan=off.  I'm not sure what the query looked like, but it
> > could have been something best implemented as a nested loop w/inner
> > index-scan.
> 
> Remember also that "enable_seqscan=off" merely adds 1e10 to the
> estimated cost of seqscans.  For sufficiently large tables this is not
> exactly a hard disable, just a thumb on the scales.  But I don't know
> what your definition of "extremely large indexes" is.

Wow, do we need to bump up that value based on larger modern hardware?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] json api WIP patch

2013-01-15 Thread Merlin Moncure
On Tue, Jan 15, 2013 at 1:04 PM, David Fetter  wrote:
> On Mon, Jan 14, 2013 at 07:52:56PM -0500, Andrew Dunstan wrote:
>>
>> On 01/14/2013 07:36 PM, Merlin Moncure wrote:
>> >While testing this I noticed that integer based 'get' routines are
>> >zero based -- was this intentional?  Virtually all other aspects of
>> >SQL are 1 based:
>> >
>> >postgres=# select json_get('[1,2,3]', 1);
>> >  json_get
>> >--
>> >  2
>> >(1 row)
>> >
>> >postgres=# select json_get('[1,2,3]', 0);
>> >  json_get
>> >--
>> >  1
>> >(1 row)
>>
>> Yes. it's intentional. SQL arrays might be 1-based by default, but
>> JavaScript arrays are not. JsonPath and similar gadgets treat the
>> arrays as zero-based. I suspect the Json-using community would not
>> thank us for being overly SQL-centric on this - and I say that as
>> someone who has always thought zero based arrays were a major design
>> mistake, responsible for countless off-by-one errors.
>
> Perhaps we could compromise by making arrays 0.5-based.

Well, I'm not prepared to argue with Andrew in this one.  It was
surprising behavior to me, but that's sample size one.

merlin




Re: [HACKERS] passing diff options to pg_regress

2013-01-15 Thread Robert Haas
On Mon, Jan 14, 2013 at 6:57 AM, Peter Eisentraut  wrote:
> I sometimes find it useful to view a regression test difference using
> other diff options, such as -u -w or more context.  There is currently
> no easy way to accomplish that.
>
> I suggest allowing to override the diff options using an environment
> variable, such as PG_REGRESS_DIFF_OPTS.  The patch is very small.

Great idea.
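Presumably the override reduces to a standard environment-variable fallback; a sketch of the pattern (the variable name is from Peter's proposal, the default option string is illustrative):

```c
#include <stdlib.h>
#include <string.h>

/* Use the caller's diff options when the environment variable is set
 * and non-empty, otherwise fall back to a built-in default. */
static const char *regress_diff_opts(void)
{
    const char *opts = getenv("PG_REGRESS_DIFF_OPTS");

    return (opts != NULL && opts[0] != '\0') ? opts : "-w";
}
```

A developer would then run something like `PG_REGRESS_DIFF_OPTS="-u -w" make check` to get unified, whitespace-insensitive diffs without touching the source.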

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] Re: [pgsql-cluster-hackers] Save The Date: Cluster-Hackers meeting May 21st

2013-01-15 Thread Josh Berkus

> Given last year we didn't have more than 30 mins to spend on logical
> replication, how much time will we have to discuss non-open source
> systems?

Mostly the idea is to discuss what some of the proprietary systems need
from core postgres, especially where it overlaps with what OSS systems
need and where the sponsoring companies are

> 
> Can I check whether the agenda of this meeting is open?
> 

Given that we haven't even put up a wiki page yet or set an hourly
schedule, it's completely open.


-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] [PATCH] COPY .. COMPRESSED

2013-01-15 Thread Stephen Frost
* Bruce Momjian (br...@momjian.us) wrote:
> On Tue, Jan 15, 2013 at 10:55:04AM -0500, Stephen Frost wrote:
> > * Peter Eisentraut (pete...@gmx.net) wrote:
> > > Operating on compressed files transparently in file_fdw is obviously
> > > useful, but why only gzip?
> > 
> > This isn't really an argument, imv.  It's only gzip *right this moment*
> > because that's all that I implemented.  I've already offered to add
> > bzip2 or whatever else people would like.
> 
> And this leads to support-my-compression-binary-of-the-day mess.  Why
> not just allow them to do '|compression-binary'?

You're right, to clarify: for *file_fdw*, which is a backend-only
operation, the popen patch is great (I thought I made that clear before).

The popen patch doesn't support the '|compression-binary' option through
the FE protocol.  Even if it did, it would only be available for
superusers as we can't allow regular users to run arbitrary commands on
the server-side.

Thanks,

Stephen




Re: [HACKERS] Get current query in a trigger function

2013-01-15 Thread Robert Haas
On Mon, Jan 14, 2013 at 9:38 PM, Vlad Arkhipov  wrote:
>>> Is there any simple way of getting a query for which a trigger was
>>> executed?
>>> debug_query_string and ActivePortal->sourceText return the top query when
>>> there are nested triggers.
>>
>> I believe - only if the trigger is written in C.
>>
> Yes, the trigger is written in C. But I didn't find any way to get access to
> the current EState or QueryDesc from a trigger function. The only common
> place shared by a trigger and the corresponding EState/QueryDesc structs seems
> to be CurrentMemoryContext in the trigger function, whose ancestor has to be
> (?) EState->es_query_cxt. It's an ugly solution, of course.

Ah, I see.  Yeah, I don't know of a good way to do that, although
there may be one.

> P.S. Is it a mistype EState->es_query_cxt? Should it be es_query_ctx?

I don't see why either of those is particularly better than the other.
 "context" has a t both before and after the x.

...Robert

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] [sepgsql 1/3] add name qualified creation label

2013-01-15 Thread Kohei KaiGai
This patch adds to sepgsql the feature of name-qualified creation labels.

Background: on creation of a database object, sepgsql assigns a
default security label according to the security policy, which has a
set of rules to determine the label of a new object.
Usually, a new object inherits its parent object's label (e.g. a table
is the parent of a column), unless a particular type_transition rule in
the policy applies.
A type_transition rule describes a particular security label to use as
the default label of a new object for a given pair of client and parent
object.
For example, the rule below says that columns created under a table
labeled "sepgsql_table_t" by a client labeled "staff_t" will get
"staff_column_t" instead of the table's label:
  TYPE_TRANSITION staff_t sepgsql_table_t:db_column staff_column_t;

Recently, this rule was enhanced to take a 5th argument for the object
name, which enables special-case handling by name.
It was originally designed to describe default security labels for
files in the /etc directory: many applications put their own
configuration files there, so the traditional type_transition rule was
too coarse to describe all the needed defaults.
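For reference, the name-qualified form simply appends the object name as a 5th argument; an invented rule in that style (the labels here are hypothetical, for illustration only) might look like:

```
# Hypothetical: give a column named "ctid" its own default label,
# distinct from ordinary columns of the same table.
TYPE_TRANSITION staff_t sepgsql_table_t:db_column staff_sysattr_column_t "ctid";
```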
On the other hand, we can port this concept to database systems as
well. One example is temporary objects created under the pg_temp
schema. If we could assign a special default label to these, it would
allow unprivileged users (who cannot create persistent tables) to
create temporary tables with no risk of information leaking to other
users. Likewise, we may be able to assign a special security label to
system columns and so on.

From the perspective of the implementation on the sepgsql side, all we
need to do is replace the old security_compute_create_raw() interface
with the new security_compute_create_name_raw().
If there are no name-qualified type_transition rules, it behaves like
the existing API, so there is no backward-compatibility issue.

This patch can be applied on the latest master branch.

Thanks,
-- 
KaiGai Kohei 


sepgsql-v9.3-creation-label-with-name.v1.patch
Description: Binary data



Re: [HACKERS] [PATCH] unified frontend support for pg_malloc et al and palloc/pfree mulation (was xlogreader-v4)

2013-01-15 Thread Tom Lane
Robert Haas  writes:
> There are an awful lot of places in our source tree where the error
> level is fixed.  We could invent a new construct, say ereport_error or
> so, that is just like ereport except that it takes no error-level
> parameter because it's hard-coded to ERROR.

> It would be a bit of a pain to change all of the existing call sites,
> but presumably it would dodge a lot of these issues about the way
> compilers optimize things, because we could simply say categorically
> that ereport_error NEVER returns.

Meh.  We've already got it working, and in a way that doesn't require
the compiler to understand __attribute__((noreturn)) --- it only has
to be aware that abort() doesn't return, in one fashion or another.
So I'm disinclined to run around and change a few thousand call sites,
much less expect extension authors to do so too.

(By my count there are about six thousand places we'd have to change.)

Note that whatever's going on on dugong is not a counterexample to
"got it working", because presumably dugong would also be misbehaving
if we'd used a different method of flagging all the ereports/elogs
as nonreturning.

regards, tom lane




Re: [HACKERS] erroneous restore into pg_catalog schema

2013-01-15 Thread Robert Haas
On Sun, Jan 13, 2013 at 4:09 PM, Tom Lane  wrote:
> Right, that is the argument for ignoring missing schemas, and I think it
> is entirely sensible for *search* activities.  But allowing *creation*
> to occur in an indeterminate schema is a horrid idea.

But the default search path is $user, public; and of those two, only
the latter exists by default.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] erroneous restore into pg_catalog schema

2013-01-15 Thread Robert Haas
On Mon, Jan 14, 2013 at 2:07 PM, Alvaro Herrera wrote:
> Tom Lane escribió:
>> Alvaro Herrera  writes:
>
>> > alvherre=# create extension adminpack;
>> > ERROR:  permission denied for schema pg_catalog
>>
>> Um.  I knew that that module's desire to shove stuff into pg_catalog
>> would bite us someday.  But now that I think about it, I'm pretty sure
>> I recall discussions to the effect that there are other third-party
>> modules doing similar things.
>
> How about we provide a superuser-only function that an extension can
> call which will set enableSystemTableMods?  It would get back
> automatically to the default value on transaction end.  That way,
> extensions that wish to install stuff in pg_catalog can explicitely
> declare it, i, and the rest of the world enjoys consistent protection.

Or just document the existing GUC and make it something less than
PGC_POSTMASTER, like maybe PGC_SUSER.

But, really, I think allow_system_table_mods paints with too broad a
brush.  It allows both things that are relatively OK (like creating a
function in pg_catalog) and things that are rampantly insane (like
dropping a column from pg_proc).  It might be a good idea to make
those things controlled by two different switches.

Or perhaps there is some other way to make sure that the user "really
meant it", like refusing to create in pg_catalog unless the schema
name is given explicitly.  I kind of like that idea, actually.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PERFORM] Slow query: bitmap scan troubles

2013-01-15 Thread Tom Lane
Bruce Momjian  writes:
> On Mon, Jan 14, 2013 at 12:56:37PM -0500, Tom Lane wrote:
>> Remember also that "enable_seqscan=off" merely adds 1e10 to the
>> estimated cost of seqscans.  For sufficiently large tables this is not
>> exactly a hard disable, just a thumb on the scales.  But I don't know
>> what your definition of "extremely large indexes" is.

> Wow, do we need to bump up that value based on larger modern hardware?

I'm disinclined to bump it up very much.  If it's more than about 1e16,
ordinary cost contributions would disappear into float8 roundoff error,
causing the planner to be making choices that are utterly random except
for minimizing the number of seqscans.  Even at 1e14 or so you'd be
losing a lot of finer-grain distinctions.  What we want is for the
behavior to be "minimize the number of seqscans but plan normally
otherwise", so those other cost contributions are still important.
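The roundoff point is easy to check directly (float8 being an IEEE double): a cpu_tuple_cost-sized contribution of 0.01 still registers against the current 1e10 penalty, but past ~1e16 the spacing between adjacent doubles exceeds it, so such contributions vanish entirely.

```c
/* Does a small cost contribution still change the total once the
 * disable penalty has been added?  With a 1e10 penalty it does
 * (cost_survives(1e10, 0.01) is true); at 1e16 the double's spacing
 * is 2.0, so the 0.01 is lost to roundoff and this returns false. */
static int cost_survives(double penalty, double contribution)
{
    return penalty + contribution != penalty;
}
```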

Anyway, at this point we're merely speculating about what's behind
Robert's report --- I'd want to see some concrete real-world examples
before changing anything.

regards, tom lane




Re: [HACKERS] json api WIP patch

2013-01-15 Thread Andrew Dunstan


On 01/15/2013 02:47 PM, Merlin Moncure wrote:

On Tue, Jan 15, 2013 at 1:04 PM, David Fetter  wrote:

On Mon, Jan 14, 2013 at 07:52:56PM -0500, Andrew Dunstan wrote:

On 01/14/2013 07:36 PM, Merlin Moncure wrote:

While testing this I noticed that integer based 'get' routines are
zero based -- was this intentional?  Virtually all other aspects of
SQL are 1 based:

postgres=# select json_get('[1,2,3]', 1);
  json_get
--
  2
(1 row)

postgres=# select json_get('[1,2,3]', 0);
  json_get
--
  1
(1 row)

Yes, it's intentional. SQL arrays might be 1-based by default, but
JavaScript arrays are not. JsonPath and similar gadgets treat the
arrays as zero-based. I suspect the Json-using community would not
thank us for being overly SQL-centric on this - and I say that as
someone who has always thought zero based arrays were a major design
mistake, responsible for countless off-by-one errors.

Perhaps we could compromise by making arrays 0.5-based.

Well, I'm not prepared to argue with Andrew in this one.  It was
surprising behavior to me, but that's sample size one.




I doubt I'm very representative either. People like David Wheeler, Taras 
Mitran, Joe Van Dyk, and the Heroku guys would be better people to ask 
than me. I'm quite prepared to change it if that's the consensus.


cheers

andrew




Re: [HACKERS] erroneous restore into pg_catalog schema

2013-01-15 Thread Tom Lane
Robert Haas  writes:
> Or perhaps there is some other way to make sure that the user "really
> meant it", like refusing to create in pg_catalog unless the schema
> name is given explicitly.  I kind of like that idea, actually.

That does seem attractive at first glance.  Did you have an
implementation in mind?  The idea that comes to mind for me is to hack
namespace.c, either to prevent activeCreationNamespace from getting set
to "pg_catalog" in the first place, or to throw error in
LookupCreationNamespace and friends.  I am not sure though if
LookupCreationNamespace et al ever get called in contexts where no
immediate object creation is intended (and thus maybe an error wouldn't
be appropriate).

regards, tom lane




Re: [HACKERS] Curious buildfarm failures (fwd)

2013-01-15 Thread Sergey Koposov

On Tue, 15 Jan 2013, Tom Lane wrote:


BTW, I just finished trying to reproduce this on an IA64 machine
belonging to Red Hat, without success.  So that seems to eliminate
any possibility of the machine architecture being the trigger issue.
The compiler's still a likely cause though.
Anybody have a similar ICC version (dugong's says it is 10.0 20070809)
to try?  Also, Sergey, can you find a non-dot-zero release to try?


I think it is indeed the main issue.
I've tried 10.1 ( 10.1.011 ) and it doesn't fail.

When I tried 11.1 (icc (ICC) 11.1 20100401) it failed in a quite strange 
way (I don't quite remember it happening before)


test tablespace   ... ok
parallel group (18 tests):  txid int2 text name oid varchar int4 char money float8 uuid float4 int8 boolean bit enum numeric rangetypes
 boolean  ... ok
 char ... ok
 name ... ok
 varchar  ... ok
 text ... ok
 int2 ... ok
 int4 ... ok
 int8 ... ok
 oid  ... ok
 float4   ... ok
 float8   ... ok
 bit  ... ok
 numeric  ... ok
 txid ... ok
 uuid ... ok
 enum ... ok
 money... ok
 rangetypes   ... FAILED (test process exited with exit code 2)
test strings  ... FAILED (test process exited with exit code 2)
test numerology   ... FAILED (test process exited with exit code 2)
parallel group (19 tests):  path interval time inet circle macaddr comments timestamp timestamptz reltime date tstypes tinterval abstime timetz lseg box polygon point
 point... FAILED (test process exited with exit code 2)
 lseg ... FAILED (test process exited with exit code 2)
 box  ... FAILED (test process exited with exit code 2)
 path ... FAILED (test process exited with exit code 2)
 polygon  ... FAILED (test process exited with exit code 2)
 circle   ... FAILED (test process exited with exit code 2)
 date ... FAILED (test process exited with exit code 2)
 time ... FAILED (test process exited with exit code 2)
 timetz   ... FAILED (test process exited with exit code 2)
 timestamp... FAILED (test process exited with exit code 2)
 timestamptz  ... FAILED (test process exited with exit code 2)
 interval ... FAILED (test process exited with exit code 2)
 abstime  ... FAILED (test process exited with exit code 2)
 reltime  ... FAILED (test process exited with exit code 2)
 tinterval... FAILED (test process exited with exit code 2)
 inet ... FAILED (test process exited with exit code 2)
 macaddr  ... FAILED (test process exited with exit code 2)
 tstypes  ... FAILED (test process exited with exit code 2)
 comments ... FAILED (test process exited with exit code 2)
parallel group (6 tests):  geometry regex horology type_sanity oidjoins opr_sanity
 geometry ... FAILED
 horology ... FAILED
 regex... ok
 oidjoins ... ok
 type_sanity  ... ok
 opr_sanity   ... ok
test insert   ... ok
test create_function_1... ok
test create_type  ... ok
test create_table ... ok
test create_function_2... ok
parallel group (2 tests):  copyselect copy
 copy ... ok
 copyselect   ... ok
parallel group (2 tests):  create_operator create_misc
 create_misc  ... ok
 create_operator  ... ok
parallel group (2 tests):  create_view create_index
 create_index ... FAILED (test process exited with exit code 2)
 create_view  ... ok
parallel group (11 tests):  constraints triggers create_cast create_function_3 updatable_views inherit drop_if_exists create_aggregate create_table_like typed_table vacuum
 create_aggregate ... FAILED (test process exited with exit code 2)
 create_function_3... FAILED (test process exited with exit code 2)
 create_cast  ... FAILED (test process exited with exit code 2)
 constraints  ... FAILED (test process exited with exit code 2)
 triggers ... FAILED (test process exited with exit code 2)
 inherit  ... FAILED (test process exited with exit code 2)
 create_table_like... FAILED (test process exite

Re: [HACKERS] [PATCH] COPY .. COMPRESSED

2013-01-15 Thread Christopher Browne
On Tue, Jan 15, 2013 at 2:53 PM, Stephen Frost  wrote:
> The popen patch doesn't support the '|compression-binary' option through
> the FE protocol.  Even if it did, it would only be available for
> superusers as we can't allow regular users to run arbitrary commands on
> the server-side.

That points towards a fix that involves having a set of non-arbitrary commands
that we allow plain users to use.

Hmm.  There's an interesting thought...

How about having a "pg_filters" table in pg_catalog which allows capturing
labels and names of known-to-be-safe binary filters:

insert into pg_filters (label, location)
values
('zcat', '/usr/bin/zcat'),
('bzip2', '/usr/bin/bzip2'),
('bunzip2', '/usr/bin/bunzip2');

And then having some capability to grant permissions to roles to use
these filters.

That's not a "version 1" capability...  Suppose we have, in 9.3, direct
references to "|/usr/bin/zcat" (and such), and then hope, in 9.4, to
tease this out into a non-superuser-capable facility via the above
pg_filters?

These filters should be useful for FDWs as well as for COPY.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"




Re: [HACKERS] erroneous restore into pg_catalog schema

2013-01-15 Thread Robert Haas
On Tue, Jan 15, 2013 at 3:22 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> Or perhaps there is some other way to make sure that the user "really
>> meant it", like refusing to create in pg_catalog unless the schema
>> name is given explicitly.  I kind of like that idea, actually.
>
> That does seem attractive at first glance.  Did you have an
> implementation in mind?  The idea that comes to mind for me is to hack
> namespace.c, either to prevent activeCreationNamespace from getting set
> to "pg_catalog" in the first place, or to throw error in
> LookupCreationNamespace and friends.  I am not sure though if
> LookupCreationNamespace et al ever get called in contexts where no
> immediate object creation is intended (and thus maybe an error wouldn't
> be appropriate).

As far as I can see, the principal place we'd want to hack would be
recomputeNamespacePath(), so that activeCreationNamespace never ends
up pointing to pg_catalog even if that's explicitly listed in
search_path.  The places where we actually work out what schema to use
are RangeVarGetCreationNamespace() and
QualifiedNameGetCreationNamespace(), but those don't seem like they'd
need any adjustment, unless perhaps we wish to whack around the "no
schema has been selected to create in" error message in some way.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PATCH] COPY .. COMPRESSED

2013-01-15 Thread Bruce Momjian
On Tue, Jan 15, 2013 at 03:37:07PM -0500, Christopher Browne wrote:
> On Tue, Jan 15, 2013 at 2:53 PM, Stephen Frost  wrote:
> > The popen patch doesn't support the '|compression-binary' option through
> > the FE protocol.  Even if it did, it would only be available for
> > superusers as we can't allow regular users to run arbitrary commands on
> > the server-side.
> 
> That points towards a fix that involves having a set of non-arbitrary commands
> that we allow plain users to use.
> 
> Hmm.  There's an interesting thought...
> 
> How about having a "pg_filters" table in pg_catalog which allows capturing
> labels and names of known-to-be-safe binary filters:
> 
> insert into pg_filters (label, location)
> values
> ('zcat', '/usr/bin/zcat'),
> ('bzip2', '/usr/bin/bzip2'),
> ('bunzip2', '/usr/bin/bunzip2');
> 
> And then having some capability to grant permissions to roles to use
> these filters.
> 
> That's not a "version 1" capability...  Suppose we have, in 9.3, that there 
> are
> direct references to "|/usr/bin/zcat" (and such), and then hope, in
> 9.4, to tease
> this out to be a non-superuser-capable facility via the above pg_filters?
> 
> These filters should be useful for FDWs as well as for COPY.

Well, COPY is super-user only, so it seems only useful for FDWs, no?  We
already have lots of user-configurable FDW options, so I can see
adding this one too.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] count(*) of zero rows returns 1

2013-01-15 Thread Robert Haas
On Tue, Jan 15, 2013 at 2:26 PM, Shaun Thomas  wrote:
> Let's see what EnterpriseDB produces:
>
> test=> select * from dual;
>
>  dummy
> ---
>  X
> (1 row)

Hey, don't blame us.  We didn't come up with this bad idea ... just
trying to make life easier for those who are used to it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] [sepgsql 3/3] Add db_procedure:execute permission checks

2013-01-15 Thread Kohei KaiGai
This patch adds sepgsql support for permission checks almost
equivalent to the existing FUNCTION EXECUTE privilege.

This feature is built on a new OAT_FUNCTION_EXEC event type, invoked
around pg_proc_aclcheck(), except in cases where a function's
permissions are checked during CREATE or ALTER commands. (An
extension can handle those cases in the OAT_POST_CREATE or
OAT_POST_ALTER hooks if needed.)

This patch assumes the db_schema:{search} patch is applied on top,
so please also check the patches below:
https://commitfest.postgresql.org/action/patch_view?id=1003
https://commitfest.postgresql.org/action/patch_view?id=1065

Thanks,
-- 
KaiGai Kohei 


sepgsql-v9.3-function-execute-permission.v1.patch
Description: Binary data



Re: [HACKERS] pg_ctl idempotent option

2013-01-15 Thread Josh Berkus
On 01/15/2013 07:55 AM, Peter Eisentraut wrote:
> On 1/14/13 10:22 AM, Tom Lane wrote:
>> Idempotent is a ten-dollar word.  Can we find something that average
>> people wouldn't need to consult a dictionary to understand?
> 
> My suggestion in the original thread was --oknodo, but people didn't
> like that either.

I'm pretty sure that Oknodo is an island in the Pacific.  ;-)

I don't have a better idea for a name, though.


-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] unlogged tables vs. GIST

2013-01-15 Thread Robert Haas
On Tue, Jan 15, 2013 at 1:58 PM, Heikki Linnakangas
 wrote:
>> I think what Heikki had in mind was that the copy in the index would be
>> the authoritative one, not some image in shared memory.  This'd imply
>> dirtying the root page on every insert, as well as increased contention
>> for the root page, so it might have performance problems.
>
> Not every insert, just every split. Which might still be a performance
> problem, but an order of magnitude smaller.

I think that might be acceptable from a performance point of view -
after all, if the index is unlogged, you're saving the cost of WAL -
but I guess I still prefer a generic solution to this problem (a
generalization of GetXLogRecPtrForTemp) rather than a special-purpose
solution based on the nitty-gritty of how GiST uses these values.
What's the difference between storing this value in pg_control and,
say, the OID counter?
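The OID-counter analogy can be made concrete. Below is a minimal sketch of a GetXLogRecPtrForTemp-style counter (a Python stand-in, not PostgreSQL code); only the strictly-increasing property matters to GiST's NSN logic, and persisting the high-water mark at checkpoint, as pg_control does for the OID counter, is what the proposed generalization would add.

```python
import itertools

# Hedged sketch of the "generic counter" idea discussed above: a
# process-wide, monotonically increasing fake LSN handed out to
# unlogged/temp GiST pages (analogous to GetXLogRecPtrForTemp).
# The values themselves are meaningless; only their ordering matters.

_fake_lsn = itertools.count(start=1)

def get_fake_lsn():
    """Return the next fake LSN; strictly increasing per process."""
    return next(_fake_lsn)

a = get_fake_lsn()
b = get_fake_lsn()
assert b > a  # the only invariant GiST's split detection relies on
```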

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PATCH] COPY .. COMPRESSED

2013-01-15 Thread Stephen Frost
* Christopher Browne (cbbro...@gmail.com) wrote:
> How about having a "pg_filters" table in pg_catalog which allows capturing
> labels and names of known-to-be-safe binary filters:

I was considering that (though I was thinking they'd be
"transformations" rather than filters; filter implies that you're
removing something, imv), but as I mentioned upthread, there are dangers
in that direction and having a default set of options strikes me as a
lot more challenging to provide.

> insert into pg_filters (label, location)
> values
> ('zcat', '/usr/bin/zcat'),
> ('bzip2', '/usr/bin/bzip2'),
> ('bunzip2', '/usr/bin/bunzip2');

We'd need to include which direction is supported also, I think.

> And then having some capability to grant permissions to roles to use
> these filters.

Yes, an additional ACL system, as I mentioned upthread, would be
required for this.

> That's not a "version 1" capability...  Suppose we have, in 9.3, that there 
> are
> direct references to "|/usr/bin/zcat" (and such), and then hope, in
> 9.4, to tease
> this out to be a non-superuser-capable facility via the above pg_filters?

It would be good to flesh out what the syntax, etc., would look like for
this, if we're going to support it, before we go down a road that limits
us in what we can do.  For example, if we implement the existing popen
call, and then later want to allow non-superusers to use certain
filters, how would the non-superuser specify the filter?  I really don't
think we want to be taking the shell-like command provided by a
non-superuser and then try to match that against a list of commands in a
table..
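For illustration, the filter idea amounts to streaming COPY data through a compressor on the way out and a decompressor on the way in. Here is a client-side sketch (Python, using the stdlib gzip module in place of an external /usr/bin/zcat-style binary; the row data is made up and no server-side changes are assumed):

```python
import gzip

# Client-side sketch of the compressed-COPY idea: data that COPY TO
# STDOUT would emit is run through a compression "filter" before being
# stored or shipped, and the inverse filter reproduces the original
# stream for COPY FROM STDIN. gzip stands in for whatever filter
# (zcat, bzip2, ...) a pg_filters-style table might name.

rows = "1\talice\n2\tbob\n"                      # hypothetical COPY output
compressed = gzip.compress(rows.encode())        # filter applied going out
restored = gzip.decompress(compressed).decode()  # inverse filter coming in
assert restored == rows                          # round-trips losslessly
```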

> These filters should be useful for FDWs as well as for COPY.

I'm not sure I see how any FDW beyond file_fdw would really benefit from
this..?  I don't think a MySQL FDW or Redis FDW would gain anything...

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] [PATCH] COPY .. COMPRESSED

2013-01-15 Thread Stephen Frost
* Bruce Momjian (br...@momjian.us) wrote:
> Well, COPY is super-user only, so it seems only useful for FDW, no?  We
> already have lots of user-configuration FDW commands, so I can see
> adding this one too.

COPY is most certainly not superuser-only..  COPY w/ popen, if that
popen can call anything, would certainly have to be superuser-only.

COPY TO STDOUT / FROM STDIN is available to and used a huge amount by
non-superusers.  Would be great if we could allow that to work with
compressed data also, imv.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] json api WIP patch

2013-01-15 Thread Daniel Farina
On Tue, Jan 15, 2013 at 12:17 PM, Andrew Dunstan  wrote:
>
> On 01/15/2013 02:47 PM, Merlin Moncure wrote:
>>
>> On Tue, Jan 15, 2013 at 1:04 PM, David Fetter  wrote:
>>>
>>> On Mon, Jan 14, 2013 at 07:52:56PM -0500, Andrew Dunstan wrote:

 On 01/14/2013 07:36 PM, Merlin Moncure wrote:
>
> While testing this I noticed that integer based 'get' routines are
> zero based -- was this intentional?  Virtually all other aspects of
> SQL are 1 based:
>
> postgres=# select json_get('[1,2,3]', 1);
>   json_get
> --
>   2
> (1 row)
>
> postgres=# select json_get('[1,2,3]', 0);
>   json_get
> --
>   1
> (1 row)

 Yes, it's intentional. SQL arrays might be 1-based by default, but
 JavaScript arrays are not. JsonPath and similar gadgets treat the
 arrays as zero-based. I suspect the Json-using community would not
 thank us for being overly SQL-centric on this - and I say that as
 someone who has always thought zero based arrays were a major design
 mistake, responsible for countless off-by-one errors.
>>>
>>> Perhaps we could compromise by making arrays 0.5-based.
>>
>> Well, I'm not prepared to argue with Andrew in this one.  It was
>> surprising behavior to me, but that's sample size one.
>
> I doubt I'm very representative either. People like David Wheeler, Taras
> Mitran, Joe Van Dyk, and the Heroku guys would be better people to ask than
> me. I'm quite prepared to change it if that's the consensus.

Hello.

I'm inclined to go with the same gut feeling you had (zero-based-indexing).

Here is the background for my reasoning:

The downside of zero-based-indexing is that people who want to use
multiple sequential container types will inevitably have to deal with
detailed and not easily type-checked integer coordinates that mean
different things in each domain that will, no doubt, lead to a number
of off-by-one errors.  Nevertheless, this cost is already paid because
one of the first things many people will do in programs generating SQL
queries is try to zero-index a SQL array, swear a bit after figuring
things out (because a NULL will be generated, not an error), and then
adjust all the offsets. So, this is not a new problem.  On many
occasions I'm sure this has caused off-by-one bugs, or the NULLs
slipped through testing and delivered funny results, yet the world
moves on.

On the other hand, the downside of going down the road of 1-based
indexing and attempting to attain relative sameness to SQL arrays is
that one would also feel obliged to implement SQL array infelicities
like 'out of bounds' yielding SQL NULL rather than an error, along
with other spectres like non-rectangular nested arrays.  SQL
array semantics are complex and The Committee can change them or --
slightly more likely -- add interactions, so it seems like a general
expectation that Postgres container types that happen to have any
reasonable ordinal addressing will implement some level of same-ness
with SQL arrays is a very messy one.  As such, if it becomes customary
to implement one-based indexing of containers, I think such customs
are best carefully circumscribed so that attempts to be 'like' SQL
arrays are only as superficial as that.

What made me come down on the side of zero-based indexing in spite of
the weaknesses are these two reasons:

* The number of people who use JSON and zero-based-indexing is very
  large, and furthermore, within that set the number that know that
  SQL even defines array support -- much less that Postgres implements
  it -- is much smaller. Thus, one is targeting cohesion with a fairly
  alien concept that is not understood by the audience.

* Maintaining PL integrated code that uses both 1-based indexing in PG
  functions and 0-based indexing in embedded languages that are likely
  to be combined with JSON -- doesn't sound very palatable, and the
  use of such PLs (e.g. plv8) seems pretty likely, too.  That can
  probably be a rich source of bugs and frustration.

If one wants SQL array semantics, it seems like the right way to get
them is coercion to a SQL array value.  Then one will receive SQL
array semantics exactly.
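The zero- versus one-based split can be seen concretely (a Python sketch, standing in for the JSON-consuming client languages discussed; the json_get behavior quoted earlier matches the zero-based case):

```python
import json

# JSON/JavaScript containers are zero-based, SQL arrays are one-based.
# json_get('[1,2,3]', 1) returning 2 matches what every JSON-consuming
# client library already does with the same document.

arr = json.loads('[1,2,3]')
assert arr[0] == 1   # zero-based: index 0 is the first element
assert arr[1] == 2   # index 1 is the second element, as json_get returns
# Under SQL's one-based subscripts, subscript 1 would instead yield 1.
```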

--
fdr



