Re: postgres_fdw vs data formatting GUCs (was Re: [HACKERS] [v9.3] writable foreign tables)

2013-03-20 Thread Daniel Farina
On Tue, Mar 19, 2013 at 10:37 PM, Daniel Farina dan...@heroku.com wrote:
 I added programming around various NULL returns reading GUCs in this
 revision, v4.

Okay, one more of those fridge-logic bugs.  Sorry for the noise. v5.

A missing PG_RE_THROW to get the proper finally-esque semantics:

--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -642,7 +642,10 @@ dblink_fetch(PG_FUNCTION_ARGS)
  }
  PG_CATCH();
  {
+ /* Pop any set GUCs, if necessary */
  restoreLocalGucs(rgs);
+
+ PG_RE_THROW();
  }
  PG_END_TRY();

This was easy to add a regression test to exercise, and so I did (not
displayed here).

--
fdr


dblink-guc-sensitive-types-v5.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Ignore invalid indexes in pg_dump

2013-03-20 Thread Simon Riggs
On 20 March 2013 02:51, Michael Paquier michael.paqu...@gmail.com wrote:

 If failures happen with CREATE INDEX CONCURRENTLY, the system will be left
 with invalid indexes. I don't think that the user would like to see invalid
 indexes of an existing system being recreated as valid after a restore.
 So why not remove invalid indexes from a dump with something like the patch
 attached?
 This should perhaps be applied in pg_dump for versions down to 8.2, where
 CREATE INDEX CONCURRENTLY was implemented?

"Invalid" also means "currently in progress", so it would be better to keep them in.

 I noticed some recent discussions about that:
 http://www.postgresql.org/message-id/20121207141236.gb4...@alvh.no-ip.org
 In this case the problem has been fixed in pg_upgrade directly.

That is valid because the index build is clearly not in progress.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] machine-parseable object descriptions

2013-03-20 Thread Dimitri Fontaine
Alvaro Herrera alvhe...@2ndquadrant.com writes:
 All in all, I'm happy with this and I'm considering committing it as
 soon as we agree on the set of columns.  I'm mildly on the side of
 removing the separate schema column and keeping name, so we'd have
 type/name/identity.

I would prefer that we keep the schema column, for easier per-schema
processing or filtering. It's way easier to provide it in a separate
column than to ask people to parse it back from the identity column.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Improving avg performance for numeric

2013-03-20 Thread Pavel Stehule
Hello

2013/3/19 Tom Lane t...@sss.pgh.pa.us:
 I wrote:
 [ looks at patch... ]  Oh, I see what's affecting the plan: you changed
 the aggtranstypes to "internal" for a bunch of aggregates.  That's not
 very good, because right now the planner takes that to mean that the
 aggregate could eat a lot of space.  We don't want that to happen for
 these aggregates, I think.

 After thinking about that for awhile: if we pursue this type of
 optimization, what would probably be appropriate is to add an aggregate
 property (stored in pg_aggregate) that allows direct specification of
 the size that the planner should assume for the aggregate's transition
 value.  We were getting away with a hardwired assumption of 8K for
 "internal" because the existing aggregates that used that transtype all
 had similar properties, but it was always really a band-aid not a proper
 solution.  A per-aggregate override could be useful in other cases too.

 This was looking like 9.4 material already, but adding such a property
 would definitely put it over the top of what we could think about
 squeezing into 9.3, IMO.


Postgres is not an in-memory OLAP database, but a lot of companies use
it for OLAP queries because it is comfortable to use. This feature could
be very interesting for those users, and could bring a significant
speedup at a relatively low cost.

Regards

Pavel

 regards, tom lane




[HACKERS] A few string fixed

2013-03-20 Thread Daniele Varrazzo
Hello,

while translating the new PostgreSQL 9.3 strings I've found a couple of
questionable ones. Patches attached.

Cheers,

-- Daniele


0001-Fixed-MultiXactIds-string-warning.patch
Description: Binary data


0002-Fixed-pasto-in-hint-string-about-making-views-deleta.patch
Description: Binary data



Re: [HACKERS] Enabling Checksums

2013-03-20 Thread Greg Stark
On Mon, Mar 18, 2013 at 5:52 PM, Bruce Momjian br...@momjian.us wrote:
 With a potential 10-20% overhead, I am unclear who would enable this at
 initdb time.

For what it's worth, I think the CPU overhead of the checksum is totally
a red herring. Of course there's no reason not to optimize it to be as
fast as possible, but if we say there's a 10% CPU overhead due to
calculating the checksum, users will think that's a perfectly reasonable
trade-off and have no trouble looking at their CPU utilization and
deciding whether they have that overhead to spare. They can always buy
machines with more cores anyway.

Added I/O overhead, especially fsync latency, is the performance impact
that I think we should be focusing on. Users will be totally taken by
surprise to hear that checksums require I/O. And fsync latency to the
xlog is very, very difficult to reduce. You can buy more hard drives
until the cows come home and the fsync latency will hardly change.
-- 
greg




Re: [HACKERS] Enabling Checksums

2013-03-20 Thread Bruce Momjian
On Mon, Mar 18, 2013 at 01:52:58PM -0400, Bruce Momjian wrote:
 I assume a user would wait until they suspected corruption to turn it
 on, and because it is only initdb-enabled, they would have to
 dump/reload their cluster.  The open question is whether this is a
 usable feature as written, or whether we should wait until 9.4.
 
 pg_upgrade can't handle this because the old/new clusters would have the
 same catalog version number and the tablespace directory names would
 conflict.  Even if they are not using tablespaces, the old heap/index
 files would not have checksums and therefore would throw an error as
 soon as you accessed them.  In fact, this feature is going to need
 pg_upgrade changes to detect from pg_controldata that the old/new
 clusters have the same checksum setting.

A few more issues with pg_upgrade: if we ever decide to change the
checksum calculation in a later major release, pg_upgrade might not work
because of the checksum change but could still work for users who don't
use checksums.

Also, while I understand why we have to set the checksum option at
initdb time, it seems we could enable users to turn it off after initdb
--- is there any mechanism for this?

Also, if a user uses checksums in 9.3, could they initdb without
checksums in 9.4 and use pg_upgrade?  As coded, the pg_controldata
checksum settings would not match and pg_upgrade would throw an error,
but it might be possible to allow this, i.e. you could go from checksum
to no checksum initdb clusters, but not from no checksum to checksum.  I
am wondering if the patch should reflect this.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] A few string fixed

2013-03-20 Thread Tom Lane
Daniele Varrazzo daniele.varra...@gmail.com writes:
 while translating the new PostgreSQL 9.3 strings I've found a couple
 questionable. Patches attached.

Hmm ... I agree with the MultiXactId -> MultiXactIds changes, but not with
this one:

 -errhint("To make the view updatable, provide an unconditional ON DELETE DO INSTEAD rule or an INSTEAD OF DELETE trigger.")));
 +errhint("To make the view deletable, provide an unconditional ON DELETE DO INSTEAD rule or an INSTEAD OF DELETE trigger.")));

We use the phrase "updatable view"; we don't say "deletable view"
(and this usage is also found in the SQL standard).  We could possibly
make the message say "To make the view updatable in this way," or
"... for this purpose", but that seems a bit long-winded to me.

regards, tom lane




Re: [HACKERS] machine-parseable object descriptions

2013-03-20 Thread Tom Lane
Alvaro Herrera alvhe...@2ndquadrant.com writes:
 The new identity column is amazingly verbose on things like pg_amproc entries:

  10650 | 1 (pg_catalog.point, pg_catalog.point) of pg_catalog.point_ops for 
 gist: 
 pg_catalog.gist_point_consistent(pg_catalog.internal,pg_catalog.point,integer,pg_catalog.oid,pg_catalog.internal)

Uh ... isn't that confusing the *identity* of the pg_amproc entry with
its *content*?  I would say that the function reference doesn't belong
there.  You do need the rest.  I would also suggest that you prepend
the word "function" (or "operator" for pg_amop), so that it reads like
"function 1 (typename, typename) of opfamilyname for amname".

regards, tom lane




[HACKERS] Materialized views vs event triggers missing docs?

2013-03-20 Thread Magnus Hagander
The table at
http://www.postgresql.org/docs/devel/static/event-trigger-matrix.html
does not include things like CREATE MATERIALIZED VIEW or REFRESH
MATERIALIZED VIEW, but they certainly seem to work?

Just a missing doc patch, or is there something in the code that's not
behaving as intended?

If it's just a doc thing - perhaps this is a table we should somehow
try to autogenerate?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




[HACKERS] Problem with background worker

2013-03-20 Thread Marc Cousin
Hi,

I'm trying to write a background worker, and I'm facing a problem with
timestamps. The following code is where I'm having a problem (it's just
a demo for the problem):

BackgroundWorkerInitializeConnection("test", NULL);
while (!got_sigterm)
{
    int ret;

    /* Wait 1s */
    ret = WaitLatch(&MyProc->procLatch,
                    WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
                    1000L);
    ResetLatch(&MyProc->procLatch);

    /* Insert dummy for now */
    StartTransactionCommand();
    SPI_connect();
    PushActiveSnapshot(GetTransactionSnapshot());
    ret = SPI_execute("INSERT INTO log VALUES "
                      "(now(), statement_timestamp(), clock_timestamp())",
                      false, 0);
    SPI_finish();
    PopActiveSnapshot();
    CommitTransactionCommand();
}

\d log

        Column        |           Type           | Modifiers
----------------------+--------------------------+-----------
 now                  | timestamp with time zone |
 statement_timestamp  | timestamp with time zone |
 clock_timestamp      | timestamp with time zone |

Here is its content. Only the clock_timestamp is right. There are missing
records at the beginning because I truncated the table for this example.

              now              |      statement_timestamp      |        clock_timestamp
-------------------------------+-------------------------------+-------------------------------
 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:52.77683+01
 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:53.784301+01
 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:54.834212+01
 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:55.848497+01
 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:56.872671+01
 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:57.888461+01
 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:58.912448+01
 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:59.936335+01
 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:02:00.951247+01
 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:02:01.967937+01
 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:01:44.618623+01 | 2013-03-20 15:02:02.983613+01


Most of the code is copy/paste from worker_spi (I don't really understand the
PushActiveSnapshot(GetTransactionSnapshot()) and PopActiveSnapshot() calls, but
the behaviour is the same with or without them, and they were in worker_spi).

I guess I'm doing something wrong, but I really don't know what it is.

Should I attach the whole code?

Regards,

Marc




Re: postgres_fdw vs data formatting GUCs (was Re: [HACKERS] [v9.3] writable foreign tables)

2013-03-20 Thread Tom Lane
Daniel Farina dan...@heroku.com writes:
 Okay, one more of those fridge-logic bugs.  Sorry for the noise. v5.

 A missing PG_RE_THROW to get the proper finally-esque semantics:

 --- a/contrib/dblink/dblink.c
 +++ b/contrib/dblink/dblink.c
 @@ -642,7 +642,10 @@ dblink_fetch(PG_FUNCTION_ARGS)
   }
   PG_CATCH();
   {
 + /* Pop any set GUCs, if necessary */
   restoreLocalGucs(rgs);
 +
 + PG_RE_THROW();
   }
   PG_END_TRY();

Um ... you shouldn't need a PG_TRY for that at all.  guc.c will take
care of popping the values on transaction abort --- that's really rather
the whole point of having that mechanism.
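For reference, the guc.c mechanism Tom alludes to looks roughly like this from a caller's perspective (a sketch against PostgreSQL 9.3 internals, not standalone code; the GUC name and value below are placeholders):

```c
/* Hypothetical caller, e.g. applyRemoteGucs(): */
int nestlevel = NewGUCNestLevel();

/* GUC_ACTION_SAVE records the old value at this nest level */
(void) set_config_option("datestyle", "ISO, MDY",
                         PGC_USERSET, PGC_S_SESSION,
                         GUC_ACTION_SAVE, true, 0);

/* ... do the work that needs the temporary settings ... */

/* Normal path: pop the saved values explicitly */
AtEOXact_GUC(true, nestlevel);

/*
 * On elog(ERROR), transaction abort itself calls AtEOXact_GUC(false, 1)
 * and restores every saved value -- which is why no PG_TRY/PG_CATCH
 * wrapper is needed around the work.
 */
```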

regards, tom lane




Re: [HACKERS] Problem with background worker

2013-03-20 Thread Alvaro Herrera
Marc Cousin escribió:
 Hi,
 
 I'm trying to write a background worker, and I'm facing a problem with
 timestamps. The following code is where I'm having a problem (it's just
 a demo for the problem):
 
 BackgroundWorkerInitializeConnection("test", NULL);
 while (!got_sigterm)
 {
     int ret;
 
     /* Wait 1s */
     ret = WaitLatch(&MyProc->procLatch,
                     WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
                     1000L);
     ResetLatch(&MyProc->procLatch);
 
     /* Insert dummy for now */
     StartTransactionCommand();
     SPI_connect();
     PushActiveSnapshot(GetTransactionSnapshot());
     ret = SPI_execute("INSERT INTO log VALUES "
                       "(now(), statement_timestamp(), clock_timestamp())",
                       false, 0);
     SPI_finish();
     PopActiveSnapshot();
     CommitTransactionCommand();
 }

Ah.  The reason for this problem is that the statement start time (which
also sets the transaction start time, when it's the first statement) is
set by postgres.c, not the transaction-control functions in xact.c.  So
you'd need to add a SetCurrentStatementStartTimestamp() call somewhere
in your loop.
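Concretely, the fix is one extra call at the top of each loop iteration (a sketch based on the code upthread; only the marked line is new):

```c
ResetLatch(&MyProc->procLatch);

/* New: advance the statement (and thus transaction) start time */
SetCurrentStatementStartTimestamp();

StartTransactionCommand();
SPI_connect();
PushActiveSnapshot(GetTransactionSnapshot());
/* ... SPI_execute(), SPI_finish(), PopActiveSnapshot() as before ... */
CommitTransactionCommand();
```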

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Ignore invalid indexes in pg_dump

2013-03-20 Thread Josh Kupershmidt
On Wed, Mar 20, 2013 at 2:00 AM, Simon Riggs si...@2ndquadrant.com wrote:
 On 20 March 2013 02:51, Michael Paquier michael.paqu...@gmail.com wrote:

 If failures happen with CREATE INDEX CONCURRENTLY, the system will be left
 with invalid indexes. I don't think that the user would like to see invalid
 indexes of an existing system being recreated as valid after a restore.
 So why not remove invalid indexes from a dump with something like the patch
 attached?
 This should perhaps be applied in pg_dump for versions down to 8.2, where
 CREATE INDEX CONCURRENTLY was implemented?

 "Invalid" also means "currently in progress", so it would be better to keep
 them in.

For invalid indexes which are left hanging around in the database, if
the index definition is included by pg_dump, it will likely cause pain
during the restore. If the index build failed the first time and
hasn't been manually dropped and recreated since then, it's a good bet
it will fail the next time. Errors during restore can be more than
just a nuisance; consider restores with --single-transaction.

And if the index is simply currently-in-progress, it seems like the
expected behavior would be for pg_dump to ignore it anyway. We don't
include other DDL objects which are not yet committed while pg_dump is
running.

Josh




Re: [HACKERS] Ignore invalid indexes in pg_dump

2013-03-20 Thread Tom Lane
Josh Kupershmidt schmi...@gmail.com writes:
 On Wed, Mar 20, 2013 at 2:00 AM, Simon Riggs si...@2ndquadrant.com wrote:
 "Invalid" also means "currently in progress", so it would be better to keep
 them in.

 For invalid indexes which are left hanging around in the database, if
 the index definition is included by pg_dump, it will likely cause pain
 during the restore. If the index build failed the first time and
 hasn't been manually dropped and recreated since then, it's a good bet
 it will fail the next time. Errors during restore can be more than
 just a nuisance; consider restores with --single-transaction.

 And if the index is simply currently-in-progress, it seems like the
 expected behavior would be for pg_dump to ignore it anyway. We don't
 include other DDL objects which are not yet committed while pg_dump is
 running.

I had been on the fence about what to do here, but I find Josh's
arguments persuasive, particularly the second one.  Why shouldn't we
consider an in-progress index to be an uncommitted DDL change?

(Now admittedly, there won't *be* any uncommitted ordinary DDL on tables
while pg_dump is running, because it takes AccessShareLock on all
tables.  But there could easily be uncommitted DDL against other types
of database objects, which pg_dump won't even see.)

regards, tom lane




Re: [HACKERS] [pgsql-advocacy] Call for Google Summer of Code mentors, admins

2013-03-20 Thread Thom Brown
On 19 March 2013 17:42, Thom Brown t...@linux.com wrote:
 On 14 February 2013 18:02, Josh Berkus j...@agliodbs.com wrote:
 Folks,

 Once again, Google is holding Summer of Code.  We need to assess whether
 we want to participate this year.

 Questions:

 - Who wants to mentor for GSOC?

 - Who can admin for GSOC?  Thom?

 - Please suggest project ideas for GSOC

 - Students seeing this -- please speak up if you have projects you plan
 to submit.

 If anyone else has more projects ideas to suggest, please do share.
 Students, please feel free to review the PostgreSQL Todo list for
 inspiration: http://wiki.postgresql.org/wiki/Todo  Of course ensure
 you don't choose anything too ambitious or trivial.

Okay, here's a random idea (which could be infeasible and/or
undesirable).  How about a way to internally schedule tasks using a
background worker process (introduced in 9.3) to wake on each tick and
run tasks?

So:

CREATE EXTENSION pg_scheduler;
--
schedule_task(task_command, task_priority, task_start, repeat_interval);

SELECT schedule_task('REINDEX my_table', 1,
                     '2012-03-20 00:10:00'::timestamp, '1 week'::interval);

SELECT list_tasks();

-[ RECORD 1 ]---+---
task_id | 1
task_command| REINDEX my_table
task_priority   | 1
task_start  | 2012-03-20 00:10:00-04
repeat_interval | 7 days
owner   | postgres

SELECT delete_task(1);

Tasks would be run in sequence if they share the same scheduled time
ordered by priority descending, beyond which it would be
non-deterministic.  Or perhaps additional worker processes to fire
commands in parallel if necessary.

Disclaimer: I haven't really thought this through.

--
Thom




Re: [HACKERS] pg_upgrade segfaults when given an invalid PGSERVICE value

2013-03-20 Thread Bruce Momjian
On Tue, Mar 19, 2013 at 10:12:21AM -0400, Steve Singer wrote:
 so it is clearly possible for PQconndefaults() to return NULL for
 service file failures.  The questions are:
 
 *  Is this what we want?
 
 What other choices do we have? I don't think PQconndefaults() should
 continue on as if PGSERVICE wasn't set in the environment after a
 failure from parseServiceInfo.

True.  Ignoring a service specification seems wrong, and issuing a
warning message weak.

 *  Should we document this?
 
 Yes the documentation should indicate that PQconndefaults() can
 return NULL for more than just memory failures.

The attached patch fixes this.  I am unclear about backpatching this as
it hits a lot of code, is rare, and adds new translation strings.  On the
other hand, it does crash applications.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
new file mode 100644
index bba0d2a..eca061f
*** a/contrib/dblink/dblink.c
--- b/contrib/dblink/dblink.c
*** dblink_fdw_validator(PG_FUNCTION_ARGS)
*** 1947,1953 ****
  		if (!options)			/* assume reason for failure is OOM */
  			ereport(ERROR,
  					(errcode(ERRCODE_FDW_OUT_OF_MEMORY),
! 					 errmsg("out of memory"),
  					 errdetail("could not get libpq's default connection options")));
  	}
  
--- 1947,1953 ----
  		if (!options)			/* assume reason for failure is OOM */
  			ereport(ERROR,
  					(errcode(ERRCODE_FDW_OUT_OF_MEMORY),
! 					 errmsg("out of memory or service name cannot be found"),
  					 errdetail("could not get libpq's default connection options")));
  	}
  
diff --git a/contrib/pg_upgrade/server.c b/contrib/pg_upgrade/server.c
new file mode 100644
index ed67759..cd246ce
*** a/contrib/pg_upgrade/server.c
--- b/contrib/pg_upgrade/server.c
*** check_pghost_envvar(void)
*** 300,306 ****
  	/* Get valid libpq env vars from the PQconndefaults function */
  
  	start = PQconndefaults();
! 
  	for (option = start; option->keyword != NULL; option++)
  	{
  		if (option->envvar && (strcmp(option->envvar, "PGHOST") == 0 ||
--- 300,310 ----
  	/* Get valid libpq env vars from the PQconndefaults function */
  
  	start = PQconndefaults();
! 	if (!start)
! 		pg_log(PG_FATAL,
! 			   "out of memory or service name cannot be found\n"
! 			   "could not get libpq's default connection options\n");
! 
  	for (option = start; option->keyword != NULL; option++)
  	{
  		if (option->envvar && (strcmp(option->envvar, "PGHOST") == 0 ||
diff --git a/contrib/postgres_fdw/option.c b/contrib/postgres_fdw/option.c
new file mode 100644
index 123cb4f..d4890be
*** a/contrib/postgres_fdw/option.c
--- b/contrib/postgres_fdw/option.c
*** InitPgFdwOptions(void)
*** 169,175 ****
  	if (!libpq_options)			/* assume reason for failure is OOM */
  		ereport(ERROR,
  				(errcode(ERRCODE_FDW_OUT_OF_MEMORY),
! 				 errmsg("out of memory"),
  				 errdetail("could not get libpq's default connection options")));
  
  	/* Count how many libpq options are available. */
--- 169,175 ----
  	if (!libpq_options)			/* assume reason for failure is OOM */
  		ereport(ERROR,
  				(errcode(ERRCODE_FDW_OUT_OF_MEMORY),
! 				 errmsg("out of memory or service name cannot be found"),
  				 errdetail("could not get libpq's default connection options")));
  
  	/* Count how many libpq options are available. */
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
new file mode 100644
index 775d250..ceb873e
*** a/doc/src/sgml/libpq.sgml
--- b/doc/src/sgml/libpq.sgml
*** typedef struct
*** 481,487 ****
     current default values.  The return value points to an array of
     <structname>PQconninfoOption</structname> structures, which ends
     with an entry having a null <structfield>keyword</> pointer.  The
!    null pointer is returned if memory could not be allocated. Note that
     the current default values (<structfield>val</structfield> fields)
     will depend on environment variables and other context.  Callers
     must treat the connection options data as read-only.
--- 481,488 ----
     current default values.  The return value points to an array of
     <structname>PQconninfoOption</structname> structures, which ends
     with an entry having a null <structfield>keyword</> pointer.  The
!    null pointer is returned if memory could not be allocated or
!    the specified connection service name cannot be found. Note that
     the current default values (<structfield>val</structfield> fields)
     will depend on environment variables and other context.  Callers
     must treat the connection options data as read-only.
diff --git a/src/bin/scripts/pg_isready.c b/src/bin/scripts/pg_isready.c
new file mode 100644
index 4ba257d..f02f4f9
*** a/src/bin/scripts/pg_isready.c
--- b/src/bin/scripts/pg_isready.c
*** main(int argc, char **argv)
*** 

Re: [HACKERS] Problem with background worker

2013-03-20 Thread Marc Cousin

On 20/03/2013 16:33, Alvaro Herrera wrote:

Marc Cousin escribió:

Hi,

I'm trying to write a background worker, and I'm facing a problem with
timestamps. The following code is where I'm having a problem (it's just
a demo for the problem):

BackgroundWorkerInitializeConnection("test", NULL);
while (!got_sigterm)
{
    int ret;

    /* Wait 1s */
    ret = WaitLatch(&MyProc->procLatch,
                    WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
                    1000L);
    ResetLatch(&MyProc->procLatch);

    /* Insert dummy for now */
    StartTransactionCommand();
    SPI_connect();
    PushActiveSnapshot(GetTransactionSnapshot());
    ret = SPI_execute("INSERT INTO log VALUES "
                      "(now(), statement_timestamp(), clock_timestamp())",
                      false, 0);
    SPI_finish();
    PopActiveSnapshot();
    CommitTransactionCommand();
}


Ah.  The reason for this problem is that the statement start time (which
also sets the transaction start time, when it's the first statement) is
set by postgres.c, not the transaction-control functions in xact.c.  So
you'd need to add a SetCurrentStatementStartTimestamp() call somewhere
in your loop.



Yes, that works. Thanks a lot !

Maybe this should be added to the worker_spi example?

Regards




Re: [HACKERS] Problem with background worker

2013-03-20 Thread Alvaro Herrera
Marc Cousin escribió:
 On 20/03/2013 16:33, Alvaro Herrera wrote:

 Ah.  The reason for this problem is that the statement start time (which
 also sets the transaction start time, when it's the first statement) is
 set by postgres.c, not the transaction-control functions in xact.c.  So
 you'd need to add a SetCurrentStatementStartTimestamp() call somewhere
 in your loop.
 
 Yes, that works. Thanks a lot !
 
 Maybe this should be added to the worker_spi example ?

Yeah, I think I need to go over the postgres.c code and figure out what
else needs to be called.  I have a pending patch from Guillaume to
improve worker_spi some more; I'll add this bit too.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_upgrade segfaults when given an invalid PGSERVICE value

2013-03-20 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 On Tue, Mar 19, 2013 at 10:12:21AM -0400, Steve Singer wrote:
 *  Should we document this?

 Yes the documentation should indicate that PQconndefaults() can
 return NULL for more than just memory failures.

 The attached patch fixes this.  I am unclear about backpatching this as
 it hits a lot of code, is rare, and adds new translation strings.  On the
 other hand, it does crash applications.

I don't particularly care for hard-wiring knowledge that "bad service
name" is the only other reason for PQconndefaults to fail (even assuming
that that's true, a point not in evidence ATM).  I care even less for
continuing to use ERRCODE_FDW_OUT_OF_MEMORY for errors clearly outside
its meaning.

I think we should either change PQconndefaults to *not* fail in this
circumstance, or find a way to return an error message.

regards, tom lane




Re: [HACKERS] A few string fixed

2013-03-20 Thread Daniele Varrazzo
On Wed, Mar 20, 2013 at 2:01 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Daniele Varrazzo daniele.varra...@gmail.com writes:
 while translating the new PostgreSQL 9.3 strings I've found a couple
 questionable. Patches attached.

 Hmm ... I agree with the MultiXactId-MultiXactIds changes, but not with
 this one:

 -errhint("To make the view updatable, provide an unconditional ON DELETE DO INSTEAD rule or an INSTEAD OF DELETE trigger.")));
 +errhint("To make the view deletable, provide an unconditional ON DELETE DO INSTEAD rule or an INSTEAD OF DELETE trigger.")));

 We use the phrase "updatable view"; we don't say "deletable view"
 (and this usage is also found in the SQL standard).  We could possibly
 make the message say "To make the view updatable in this way," or
 "... for this purpose", but that seems a bit long-winded to me.

OK, I just thought it was a pasto.

-- Daniele




Re: [HACKERS] Improving avg performance for numeric

2013-03-20 Thread Hadi Moshayedi
Hi Tom,

Tom Lane t...@sss.pgh.pa.us wrote:
 After thinking about that for awhile: if we pursue this type of
 optimization, what would probably be appropriate is to add an aggregate
 property (stored in pg_aggregate) that allows direct specification of
 the size that the planner should assume for the aggregate's transition
 value.  We were getting away with a hardwired assumption of 8K for
 "internal" because the existing aggregates that used that transtype all
 had similar properties, but it was always really a band-aid not a proper
 solution.  A per-aggregate override could be useful in other cases too.

Cool.

I created a patch which adds an aggregate property to pg_aggregate, so
the transition space can be overridden. This patch doesn't contain
the numeric optimizations. It uses 0 (meaning not-set) for all
existing aggregates.

I manually tested it a bit, by changing this value for aggregates and
observing the changes in plan. I also updated some docs and pg_dump.

Does this look like something along the lines of what you meant?

Thanks,
  -- Hadi


aggregate-transspace.patch
Description: Binary data



Re: postgres_fdw vs data formatting GUCs (was Re: [HACKERS] [v9.3] writable foreign tables)

2013-03-20 Thread Daniel Farina
On Wed, Mar 20, 2013 at 7:43 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Daniel Farina dan...@heroku.com writes:
 Okay, one more of those fridge-logic bugs.  Sorry for the noise. v5.

 A missing PG_RETHROW to get the properly finally-esque semantics:

 --- a/contrib/dblink/dblink.c
 +++ b/contrib/dblink/dblink.c
 @@ -642,7 +642,10 @@ dblink_fetch(PG_FUNCTION_ARGS)
   }
   PG_CATCH();
   {
 + /* Pop any set GUCs, if necessary */
   restoreLocalGucs(rgs);
 +
 + PG_RE_THROW();
   }
   PG_END_TRY();

 Um ... you shouldn't need a PG_TRY for that at all.  guc.c will take
 care of popping the values on transaction abort --- that's really rather
 the whole point of having that mechanism.

Hmm, well, merely raising the error doesn't reset the GUCs, so I was
rather thinking that this was a good idea to compose more neatly in
the case of nested exception processing, e.g.:

PG_TRY();
{
PG_TRY();
{
elog(NOTICE, pre: %s,
 GetConfigOption(DateStyle, false, true));
materializeResult(fcinfo, res);
}
PG_CATCH();
{
elog(NOTICE, inner catch: %s,
 GetConfigOption(DateStyle, false, true));
PG_RE_THROW();
}
PG_END_TRY();
}
PG_CATCH();
{
elog(NOTICE, outer catch: %s,
 GetConfigOption(DateStyle, false, true));
restoreLocalGucs(rgs);
elog(NOTICE, restored: %s,
 GetConfigOption(DateStyle, false, true));
PG_RE_THROW();
}
PG_END_TRY();

I don't think this paranoia is made in other call sites for AtEOXact,
so included is a version that takes the same stance. This one shows up
best with the whitespace-insensitive option for git:

--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -634,21 +634,8 @@ dblink_fetch(PG_FUNCTION_ARGS)
 * affect parsing and then un-set them afterwards.
 */
initRemoteGucs(rgs, conn);
-
-   PG_TRY();
-   {
applyRemoteGucs(rgs);
materializeResult(fcinfo, res);
-   }
-   PG_CATCH();
-   {
-   /* Pop any set GUCs, if necessary */
-   restoreLocalGucs(rgs);
-
-   PG_RE_THROW();
-   }
-   PG_END_TRY();
-
restoreLocalGucs(rgs);

return (Datum) 0;
@@ -823,9 +810,6 @@ dblink_record_internal(FunctionCallInfo fcinfo,
bool is_async)
if (freeconn)
PQfinish(conn);

-   /* Pop any set GUCs, if necessary */
-   restoreLocalGucs(rgs);
-
PG_RE_THROW();
}
PG_END_TRY();

--
fdr


dblink-guc-sensitive-types-v6.patch
Description: Binary data



Re: [HACKERS] pg_upgrade segfaults when given an invalid PGSERVICE value

2013-03-20 Thread Bruce Momjian
On Wed, Mar 20, 2013 at 12:30:32PM -0400, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  On Tue, Mar 19, 2013 at 10:12:21AM -0400, Steve Singer wrote:
  *  Should we document this?
 
  Yes the documentation should indicate that PQconndefaults() can
  return NULL for more than just memory failures.
 
  The attached patch fixes this.  I am unclear about backpatching this as
  it hits lot of code, is rare, and adds new translation strings.  On the
  other hand, it does crash the applications.
 
 I don't particularly care for hard-wiring knowledge that bad service
 name is the only other reason for PQconndefaults to fail (even assuming
 that that's true, a point not in evidence ATM).  I care even less for
 continuing to use ERRCODE_FDW_OUT_OF_MEMORY for errors clearly outside
 its meaning.

Yes, overloading that error code was bad.

 I think we should either change PQconndefaults to *not* fail in this
 circumstance, or find a way to return an error message.

Well, Steve Singer didn't like the idea of ignoring a service lookup
failure.  What do others think?  We can throw a warning, but there is no
way to know if the application allows the user to see it.

Adding a way to communicate the service failure reason to the user would
require a libpq API change, unless we go crazy and say -1 means service
failure, and assume -1 can't be a valid pointer.

Perhaps we need to remove the memory references and just report a
failure, and mention services as a possible cause.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] [pgsql-advocacy] Call for Google Summer of Code mentors, admins

2013-03-20 Thread Atri Sharma
It does sound nice, something like cron?

We can use a scheduling algorithm, and can define a pool of tasks as well as a 
time budget for running them. Then, a scheduling algorithm can pick tasks from 
the pool based on priorities and the duration of each task. I can see a dynamic 
programming solution to this problem.
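As a toy illustration of that dynamic-programming angle (nothing here is PostgreSQL code; the task names, priorities, and durations are invented), picking the maximum-priority subset of tasks that fits in a fixed time budget is the classic 0/1 knapsack:

```python
# Toy sketch: choose tasks maximizing total priority within a time budget.
# tasks: list of (name, priority, duration); budget: total time units.
def pick_tasks(tasks, budget):
    # best[t] = (max total priority, chosen task names) using <= t time units
    best = [(0, [])] * (budget + 1)
    for name, prio, dur in tasks:
        # iterate downwards so each task is used at most once (0/1 knapsack)
        for t in range(budget, dur - 1, -1):
            cand = best[t - dur][0] + prio
            if cand > best[t][0]:
                best[t] = (cand, best[t - dur][1] + [name])
    return best[budget]

pool = [("REINDEX my_table", 3, 5), ("VACUUM big", 2, 4), ("ANALYZE", 1, 1)]
print(pick_tasks(pool, 6))  # → (4, ['REINDEX my_table', 'ANALYZE'])
```

A real background worker would of course also have to handle recurring start times and wakeup ticks; this only shows the pick-from-the-pool step.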

Atri

Sent from my iPad

On 20-Mar-2013, at 21:33, Thom Brown t...@linux.com wrote:

 On 19 March 2013 17:42, Thom Brown t...@linux.com wrote:
 On 14 February 2013 18:02, Josh Berkus j...@agliodbs.com wrote:
 Folks,
 
 Once again, Google is holding Summer of Code.  We need to assess whether
 we want to participate this year.
 
 Questions:
 
 - Who wants to mentor for GSOC?
 
 - Who can admin for GSOC?  Thom?
 
 - Please suggest project ideas for GSOC
 
 - Students seeing this -- please speak up if you have projects you plan
 to submit.
 
 If anyone else has more projects ideas to suggest, please do share.
 Students, please feel free to review the PostgreSQL Todo list for
 inspiration: http://wiki.postgresql.org/wiki/Todo  Of course ensure
 you don't choose anything too ambitious or trivial.
 
 Okay, here's a random idea (which could be infeasible and/or
 undesirable).  How about a way to internally schedule tasks using a
 background worker process (introduced in 9.2) to wake on each tick and
 run tasks?
 
 So:
 
 CREATE EXTENSION pg_scheduler;
 --
 schedule_task(task_command, task_priority, task_start, repeat_interval);
 
 SELECT schedule_task('REINDEX my_table', 1, '2012-03-20
 00:10:00'::timestamp, '1 week'::interval);
 
 SELECT list_tasks();
 
 -[ RECORD 1 ]---+---
 task_id | 1
 task_command| REINDEX my_table
 task_priority   | 1
 task_start  | 2012-03-20 00:10:00-04
 repeat_interval | 7 days
 owner   | postgres
 
 SELECT delete_task(1);
 
 Tasks would be run in sequence if they share the same scheduled time
 ordered by priority descending, beyond which it would be
 non-deterministic.  Or perhaps additional worker processes to fire
 commands in parallel if necessary.
 
 Disclaimer: I haven't really thought this through.
 
 --
 Thom
 
 
 -- 
 Sent via pgsql-advocacy mailing list (pgsql-advoc...@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-advocacy




Re: postgres_fdw vs data formatting GUCs (was Re: [HACKERS] [v9.3] writable foreign tables)

2013-03-20 Thread Tom Lane
Daniel Farina dan...@heroku.com writes:
 On Wed, Mar 20, 2013 at 7:43 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Um ... you shouldn't need a PG_TRY for that at all.  guc.c will take
 care of popping the values on transaction abort --- that's really rather
 the whole point of having that mechanism.

 Hmm, well, merely raising the error doesn't reset the GUCs, so I was
 rather thinking that this was a good idea to compose more neatly in
 the case of nested exception processing, e.g.:

In general, we don't allow processing to resume after an error until
transaction or subtransaction abort cleanup has been done.  It's true
that if you look at the GUC state in a PG_CATCH block, you'll see it
hasn't been reset yet, but that's not very relevant.

regards, tom lane




Re: [HACKERS] pg_upgrade segfaults when given an invalid PGSERVICE value

2013-03-20 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 On Wed, Mar 20, 2013 at 12:30:32PM -0400, Tom Lane wrote:
 I think we should either change PQconndefaults to *not* fail in this
 circumstance, or find a way to return an error message.

 Well, Steve Singer didn't like the idea of ignoring a service lookup
 failure.  What do others think?  We can throw a warning, but there is no
 way to know if the application allows the user to see it.

Short of changing PQconndefaults's API, it seems like the only
reasonable answer is to not fail *in the context of PQconndefaults*.
We could still fail for bad service name in a real connection operation
(where there is an opportunity to return an error message).

While this surely isn't the nicest answer, it doesn't seem totally
unreasonable to me.  A bad service name indeed does not contribute
anything to the set of defaults available.

regards, tom lane




[HACKERS] Let's invent a function to report lock-wait-blocking PIDs

2013-03-20 Thread Tom Lane
I was just looking into why the -DCLOBBER_CACHE_ALWAYS buildfarm
critters aren't managing to run the new timeouts isolation test
successfully, despite very generous timeouts.  The answer is that
2 seconds isn't quite enough time to parse+plan+execute the query
that isolationtester uses to see if the current test session is
blocked on a lock, if CLOBBER_CACHE_ALWAYS is on.  Now, that query
is totally horrible:

appendPQExpBufferStr(wait_query,
 SELECT 1 FROM pg_locks holder, pg_locks waiter 
 WHERE NOT waiter.granted AND waiter.pid = $1 
 AND holder.granted 
 AND holder.pid <> $1 AND holder.pid IN ();
/* The spec syntax requires at least one session; assume that here. */
appendPQExpBuffer(wait_query, %s, backend_pids[1]);
for (i = 2; i  nconns; i++)
appendPQExpBuffer(wait_query, , %s, backend_pids[i]);
appendPQExpBufferStr(wait_query,
 ) 

 AND holder.mode = ANY (CASE waiter.mode 
 WHEN 'AccessShareLock' THEN ARRAY[
 'AccessExclusiveLock'] 
 WHEN 'RowShareLock' THEN ARRAY[
 'ExclusiveLock',
 'AccessExclusiveLock'] 
 WHEN 'RowExclusiveLock' THEN ARRAY[
 'ShareLock',
 'ShareRowExclusiveLock',
 'ExclusiveLock',
 'AccessExclusiveLock'] 
 WHEN 'ShareUpdateExclusiveLock' THEN ARRAY[
 'ShareUpdateExclusiveLock',
 'ShareLock',
 'ShareRowExclusiveLock',
 'ExclusiveLock',
 'AccessExclusiveLock'] 
 WHEN 'ShareLock' THEN ARRAY[
 'RowExclusiveLock',
 'ShareUpdateExclusiveLock',
 'ShareRowExclusiveLock',
 'ExclusiveLock',
 'AccessExclusiveLock'] 
 WHEN 'ShareRowExclusiveLock' THEN ARRAY[
 'RowExclusiveLock',
 'ShareUpdateExclusiveLock',
 'ShareLock',
 'ShareRowExclusiveLock',
 'ExclusiveLock',
 'AccessExclusiveLock'] 
 WHEN 'ExclusiveLock' THEN ARRAY[
 'RowShareLock',
 'RowExclusiveLock',
 'ShareUpdateExclusiveLock',
 'ShareLock',
 'ShareRowExclusiveLock',
 'ExclusiveLock',
 'AccessExclusiveLock'] 
 WHEN 'AccessExclusiveLock' THEN ARRAY[
 'AccessShareLock',
 'RowShareLock',
 'RowExclusiveLock',
 'ShareUpdateExclusiveLock',
 'ShareLock',
 'ShareRowExclusiveLock',
 'ExclusiveLock',
 'AccessExclusiveLock'] END) 

  AND holder.locktype IS NOT DISTINCT FROM waiter.locktype 
  AND holder.database IS NOT DISTINCT FROM waiter.database 
  AND holder.relation IS NOT DISTINCT FROM waiter.relation 
 AND holder.page IS NOT DISTINCT FROM waiter.page 
 AND holder.tuple IS NOT DISTINCT FROM waiter.tuple 
  AND holder.virtualxid IS NOT DISTINCT FROM waiter.virtualxid 
AND holder.transactionid IS NOT DISTINCT FROM waiter.transactionid 
AND holder.classid IS NOT DISTINCT FROM waiter.classid 
 AND holder.objid IS NOT DISTINCT FROM waiter.objid 
AND holder.objsubid IS NOT DISTINCT FROM waiter.objsubid );

This is way more knowledge than we (should) want a client to embed about
which lock types block which others.  What's worse, it's still wrong.
The query will find cases where one of the test sessions *directly*
blocks another one, but not cases where the blockage is indirect.
For example, consider that A holds AccessShareLock, B is waiting for
AccessExclusiveLock on the same object, and C is queued up behind B
for another AccessShareLock.  This query will not think that C is
blocked, not even if B is part of the set of sessions of interest
(because B will show the lock as not granted); but especially so if
B is not part of the set.

I think that such situations may not arise in the specific context that
isolationtester says it's worried about, which is to disregard waits for
locks held by autovacuum.  But in general, you can't reliably tell who's
blocking whom with a query like this.
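The A/B/C case can be sketched in a few lines (a toy model, not PostgreSQL code; the conflict table is cut down to just the two modes involved, and the pid letters are invented):

```python
# A pg_locks-style view records only granted flags, not queue order,
# so a "direct conflict" query misses C waiting behind B.
CONFLICTS = {("AccessShareLock", "AccessExclusiveLock"),
             ("AccessExclusiveLock", "AccessShareLock"),
             ("AccessExclusiveLock", "AccessExclusiveLock")}

# (pid, requested mode, granted) on the same object; list order = queue order
locks = [("A", "AccessShareLock", True),
         ("B", "AccessExclusiveLock", False),
         ("C", "AccessShareLock", False)]

def direct_blockers(waiter_pid):
    """What the pg_locks-style query sees: granted locks conflicting with us."""
    mode = next(m for p, m, g in locks if p == waiter_pid and not g)
    return [p for p, m, g in locks if g and (m, mode) in CONFLICTS]

def queue_blockers(waiter_pid):
    """With queue order known: anyone ahead of us whose request conflicts."""
    idx = next(i for i, (p, m, g) in enumerate(locks) if p == waiter_pid)
    mode = locks[idx][1]
    return [p for p, m, g in locks[:idx] if (m, mode) in CONFLICTS]

print(direct_blockers("C"))  # → []    (query thinks C isn't blocked)
print(queue_blockers("C"))   # → ['B'] (C really waits behind B)
```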

If isolationtester were the only 

Re: [HACKERS] pg_upgrade segfaults when given an invalid PGSERVICE value

2013-03-20 Thread Bruce Momjian
On Wed, Mar 20, 2013 at 01:30:20PM -0400, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  On Wed, Mar 20, 2013 at 12:30:32PM -0400, Tom Lane wrote:
  I think we should either change PQconndefaults to *not* fail in this
  circumstance, or find a way to return an error message.
 
  Well, Steve Singer didn't like the idea of ignoring a service lookup
  failure.  What do others think?  We can throw a warning, but there is no
  way to know if the application allows the user to see it.
 
 Short of changing PQconndefaults's API, it seems like the only
 reasonable answer is to not fail *in the context of PQconndefaults*.
 We could still fail for bad service name in a real connection operation
 (where there is an opportunity to return an error message).

Yes, that is _a_ plan.

 While this surely isn't the nicest answer, it doesn't seem totally
 unreasonable to me.  A bad service name indeed does not contribute
 anything to the set of defaults available.

I think the concern is that the services file could easily change the
defaults that are used for connecting, though you could argue that the
real defaults for a bad service entry are properly returned.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Materialized view assertion failure in HEAD

2013-03-20 Thread Robert Haas
On Mon, Mar 18, 2013 at 4:42 PM, Kevin Grittner kgri...@ymail.com wrote:
 I want to give everyone else a chance to weigh in before I start
 the pendulum swinging back in the other direction on OIDs in MVs.
 Opinions?

I agree that it's probably better to just disallow this, but I have to
admit I don't like your proposed patch much.  It seems to me that the
right place to fix this is in interpretOidsOption(), by returning
false rather than default_with_oids whenever the relation is a
materialized view.  That would require passing the relkind as an
additional argument to interpretOidsOption(), but that doesn't seem
problematic.

Then, the parser could just error out if OIDS=anything appears in the
options list, but it wouldn't need to actually kludge the options list
as you've done here.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Let's invent a function to report lock-wait-blocking PIDs

2013-03-20 Thread Robert Haas
On Wed, Mar 20, 2013 at 2:02 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I was just looking into why the -DCLOBBER_CACHE_ALWAYS buildfarm
 critters aren't managing to run the new timeouts isolation test
 successfully, despite very generous timeouts.  The answer is that
 2 seconds isn't quite enough time to parse+plan+execute the query
 that isolationtester uses to see if the current test session is
 blocked on a lock, if CLOBBER_CACHE_ALWAYS is on.  Now, that query
 is totally horrible:

 appendPQExpBufferStr(wait_query,
  SELECT 1 FROM pg_locks holder, pg_locks waiter 
  WHERE NOT waiter.granted AND waiter.pid = $1 
  AND holder.granted 
  AND holder.pid <> $1 AND holder.pid IN ();
 /* The spec syntax requires at least one session; assume that here. */
 appendPQExpBuffer(wait_query, %s, backend_pids[1]);
 for (i = 2; i  nconns; i++)
 appendPQExpBuffer(wait_query, , %s, backend_pids[i]);
 appendPQExpBufferStr(wait_query,
  ) 

  AND holder.mode = ANY (CASE waiter.mode 
  WHEN 'AccessShareLock' THEN ARRAY[
  'AccessExclusiveLock'] 
  WHEN 'RowShareLock' THEN ARRAY[
  'ExclusiveLock',
  'AccessExclusiveLock'] 
  WHEN 'RowExclusiveLock' THEN ARRAY[
  'ShareLock',
  'ShareRowExclusiveLock',
  'ExclusiveLock',
  'AccessExclusiveLock'] 
  WHEN 'ShareUpdateExclusiveLock' THEN ARRAY[
  'ShareUpdateExclusiveLock',
  'ShareLock',
  'ShareRowExclusiveLock',
  'ExclusiveLock',
  'AccessExclusiveLock'] 
  WHEN 'ShareLock' THEN ARRAY[
  'RowExclusiveLock',
  'ShareUpdateExclusiveLock',
  'ShareRowExclusiveLock',
  'ExclusiveLock',
  'AccessExclusiveLock'] 
  WHEN 'ShareRowExclusiveLock' THEN ARRAY[
  'RowExclusiveLock',
  'ShareUpdateExclusiveLock',
  'ShareLock',
  'ShareRowExclusiveLock',
  'ExclusiveLock',
  'AccessExclusiveLock'] 
  WHEN 'ExclusiveLock' THEN ARRAY[
  'RowShareLock',
  'RowExclusiveLock',
  'ShareUpdateExclusiveLock',
  'ShareLock',
  'ShareRowExclusiveLock',
  'ExclusiveLock',
  'AccessExclusiveLock'] 
  WHEN 'AccessExclusiveLock' THEN ARRAY[
  'AccessShareLock',
  'RowShareLock',
  'RowExclusiveLock',
  'ShareUpdateExclusiveLock',
  'ShareLock',
  'ShareRowExclusiveLock',
  'ExclusiveLock',
  'AccessExclusiveLock'] END) 

   AND holder.locktype IS NOT DISTINCT FROM waiter.locktype 
   AND holder.database IS NOT DISTINCT FROM waiter.database 
   AND holder.relation IS NOT DISTINCT FROM waiter.relation 
  AND holder.page IS NOT DISTINCT FROM waiter.page 
  AND holder.tuple IS NOT DISTINCT FROM waiter.tuple 
   AND holder.virtualxid IS NOT DISTINCT FROM waiter.virtualxid 
 AND holder.transactionid IS NOT DISTINCT FROM waiter.transactionid 
 AND holder.classid IS NOT DISTINCT FROM waiter.classid 
  AND holder.objid IS NOT DISTINCT FROM waiter.objid 
 AND holder.objsubid IS NOT DISTINCT FROM waiter.objsubid );

 This is way more knowledge than we (should) want a client to embed about
 which lock types block which others.  What's worse, it's still wrong.
 The query will find cases where one of the test sessions *directly*
 blocks another one, but not cases where the blockage is indirect.
 For example, consider that A holds AccessShareLock, B is waiting for
 AccessExclusiveLock on the same object, and C is queued up behind B
 for another AccessShareLock.  This query will not think that C is
 blocked, not even if B is part of the set of sessions of interest
 (because B will show the lock as not granted); but especially so if
 B is not part of the set.

 I think that such situations may not arise in the specific context that
 isolationtester says it's worried about, which is to disregard 

Re: [HACKERS] Let's invent a function to report lock-wait-blocking PIDs

2013-03-20 Thread Alvaro Herrera
Robert Haas escribió:
 On Wed, Mar 20, 2013 at 2:02 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  I was just looking into why the -DCLOBBER_CACHE_ALWAYS buildfarm
  critters aren't managing to run the new timeouts isolation test
  successfully, despite very generous timeouts.  The answer is that
  2 seconds isn't quite enough time to parse+plan+execute the query
  that isolationtester uses to see if the current test session is
  blocked on a lock, if CLOBBER_CACHE_ALWAYS is on.  Now, that query
  is totally horrible:

  In the isolationtester use-case, we'd get the right answer by testing
  whether this function's result has any overlap with the set of PIDs of
  test sessions, ie
 
  select pg_blocking_pids($1) && array[pid1, pid2, pid3, ...]
 
 Sounds excellent.

Yeah, I have looked at that query a couple of times wondering how it
could be improved and came up blank.  Glad you had a reason to be in the
area.

-- 
Álvaro Herrera  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] machine-parseable object descriptions

2013-03-20 Thread Tom Lane
Alvaro Herrera alvhe...@2ndquadrant.com writes:
 One change I made was to move all the new code from dependency.c into
 objectaddress.c.  The only reason it was in dependency.c was that
 getObjectDescription was there in the first place; but it doesn't seem
 to me that it really belongs there.  (Back when it was first created,
 there was no objectaddress.c at all, and dependency.c was the only user
 of it.)  If there were no backpatching considerations, I would suggest
 we move getObjectDescription() to objectaddress.c as well, but I'm not
 sure it's worth the trouble, but I'm not wedded to that if somebody
 thinks both things should be kept together.

+1 for moving getObjectDescription to objectaddress.c.  As you say,
that's probably where it would've been if that file had existed at
the time.  I don't recall that we've had to back-patch many changes
in that function, so I don't think that concern is major.

 Finally: it'd be nice to be able to get pg_am identities with these
 functions too.  Then you could use a simple query to get object
 identities + descriptions from pg_description (right now you have to
 exclude that catalog specifically, otherwise the query bombs out).  But
 it'd be a lot of trouble, and since these objects are not really
 pluggable, I'm not bothering.  We can always add it later if there's
 more interesting use for it.

I think that would be a good thing to add, but no objection to leaving
it for a follow-on patch.

regards, tom lane




Re: [HACKERS] Let's invent a function to report lock-wait-blocking PIDs

2013-03-20 Thread Greg Stark
On Wed, Mar 20, 2013 at 6:02 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I propose that we should add a backend function that simplifies this
 type of query.  The API that comes to mind is (name subject to
 bikeshedding)

 pg_blocking_pids(pid int) returns int[]

I've wanted to use pg_locks as a demonstration for recursive queries
many times and ran into the same problem. It's just too hard to figure
out which lock holders would be blocking which other locks.

I would like to be able to generate the full graph showing indirect
blocking. This seems to be not quite powerful enough to do it though.
I would have expected something that took whole pg_lock row values or
something like that.
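For what it's worth, given only the proposed pg_blocking_pids(pid) -> int[] function, a client could still recover the full graph by taking the transitive closure itself. A sketch (the pids and the direct-blocker map are invented, standing in for repeated calls to the proposed function):

```python
# Transitive closure over a direct-blockers map: for one pid, collect
# everyone blocking it directly or indirectly.
def all_blockers(pid, direct):
    """direct: dict pid -> list of directly-blocking pids (as the proposed
    function would return). Returns the set of direct + indirect blockers."""
    seen, stack = set(), [pid]
    while stack:
        for b in direct.get(stack.pop(), []):
            if b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

# A (101) holds the lock, B (102) waits behind A, C (103) waits behind B
direct = {101: [], 102: [101], 103: [102]}
print(sorted(all_blockers(103, direct)))  # → [101, 102]
```

The caveat Tom raises below still applies: successive calls see different snapshots, so a closure assembled client-side is only approximate.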

-- 
greg




Re: [HACKERS] Materialized view assertion failure in HEAD

2013-03-20 Thread Kevin Grittner
Robert Haas robertmh...@gmail.com wrote:
 Kevin Grittner kgri...@ymail.com wrote:
 I want to give everyone else a chance to weigh in before I start
 the pendulum swinging back in the other direction on OIDs in MVs.
 Opinions?

 I agree that it's probably better to just disallow this, but I have to
 admit I don't like your proposed patch much.  It seems to me that the
 right place to fix this is in interpretOidsOption(), by returning
 false rather than default_with_oids whenever the relation is a
 materialized view.  That would require passing the relkind as an
 additional argument to interpretOidsOption(), but that doesn't seem
 problematic.

I like it.  It seems to be less of a modularity violation than my
suggestion, and if we're going to have kluges for oids, we might as
well keep them together rather than scattering them around.

 Then, the parser could just error out if OIDS=anything appears in the
 options list, but it wouldn't need to actually kludge the options list
 as you've done here.

Right.

Any other comments?

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Let's invent a function to report lock-wait-blocking PIDs

2013-03-20 Thread Tom Lane
Greg Stark st...@mit.edu writes:
 On Wed, Mar 20, 2013 at 6:02 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I propose that we should add a backend function that simplifies this
 type of query.  The API that comes to mind is (name subject to
 bikeshedding)
 
 pg_blocking_pids(pid int) returns int[]

 I've wanted to use pg_locks as a demonstration for recursive queries
 many times and ran into the same problem. It's just too hard to figure
 out which lock holders would be blocking which other locks.

 I would like to be able to generate the full graph showing indirect
 blocking. This seems to be not quite powerful enough to do it though.
 I would have expected something that took whole pg_lock row values or
 something like that.

I wanted to write the function so it would inspect the lock data
structures directly rather than reconstruct them from pg_locks output;
coercing those back from text to internal form and matching up the lock
identities is a very large part of the inefficiency of the
isolationtester query.  Moreover, the pg_locks output fails to capture
lock queue ordering at all, I believe, so the necessary info just isn't
there for determining who's blocking whom in the case of conflicting
ungranted requests.

Now a disadvantage of that approach is that successive calls to the
function won't necessarily see the same state.  So if we wanted to break
down the results into direct and indirect blockers, we couldn't do that
with separate functions; we'd have to think of some representation that
captures all the info in a single function's output.

Also, I intentionally proposed that this just return info relevant to a
single process, in hopes that that would make it cheap enough that we
could do the calculations while holding the lock data structure LWLocks.
(Not having written the code yet, I'm not totally sure that will fly.)
If we want a global view of the who-blocks-whom situation, I think we'll
need another approach.  But since this way solves isolationtester's
problem fairly neatly, I was hopeful that it would be useful for other
apps too.

regards, tom lane




Re: [HACKERS] find libxml2 using pkg-config

2013-03-20 Thread Peter Eisentraut
On 3/4/13 1:36 PM, Noah Misch wrote:
 Do you have in mind a target system exhibiting a problem?  CentOS 6 ships a
 single xml2-config, but its --cflags --libs output is the same regardless of
 the installed combination of libxml2-dev packages.  Ubuntu 13.04 does not ship
 32-bit libxml2, so it avoids the question.

It does, because you can just install the libxml2 package from the
32-bit distribution.  (So there will no longer be packages in the 64-bit
distribution that actually contain 32-bit code, at least in the long run.)

But back to the main question:  Stock systems probably won't exhibit the
problem, because they just dodge the problem by omitting the -L option
from the xml2-config output and rely on the default linker paths to do
the right thing.  But if you use a nondefault libxml2 install or a
nondefault compiler, interesting things might start to happen.

I think at this point, the issue is probably too obscure, and the people
affected by it hopefully know what they are doing, so it might not be
important in practice.  In light of the other flaws that you have
pointed out, I'd be fine with withdrawing this patch for now.  But we
should keep an eye on the situation.





Re: [HACKERS] machine-parseable object descriptions

2013-03-20 Thread Alvaro Herrera
Tom Lane wrote:
 Alvaro Herrera alvhe...@2ndquadrant.com writes:
  One change I made was to move all the new code from dependency.c into
  objectaddress.c.  The only reason it was in dependency.c was that
  getObjectDescription was there in the first place; but it doesn't seem
  to me that it really belongs there.  (Back when it was first created,
  there was no objectaddress.c at all, and dependency.c was the only user
  of it.)  If there were no backpatching considerations, I would suggest
  we move getObjectDescription() to objectaddress.c as well, but I'm not
  sure it's worth the trouble, but I'm not wedded to that if somebody
  thinks both things should be kept together.
 
 +1 for moving getObjectDescription to objectaddress.c.  As you say,
 that's probably where it would've been if that file had existed at
 the time.  I don't recall that we've had to back-patch many changes
 in that function, so I don't think that concern is major.

Okay, I have pushed it with that change.  Thanks for the quick feedback.

-- 
Álvaro Herrera  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] sql_drop Event Triggerg

2013-03-20 Thread Alvaro Herrera
Here's a new version of this patch, rebased on top of the new
pg_identify_object() stuff.  Note that the regression test doesn't work
yet, because I didn't adjust to the new identity output definition (the
docs need work, too).  But that's a simple change to do.  I'm leaving
that for later.

-- 
Álvaro Herrera  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
*** a/doc/src/sgml/event-trigger.sgml
--- b/doc/src/sgml/event-trigger.sgml
***
*** 36,43 
   The literalddl_command_start/ event occurs just before the
   execution of a literalCREATE/, literalALTER/, or literalDROP/
   command.  As an exception, however, this event does not occur for
!  DDL commands targeting shared objects - databases, roles, and tablespaces
!  - or for command targeting event triggers themselves.  The event trigger
   mechanism does not support these object types.
   literalddl_command_start/ also occurs just before the execution of a
   literalSELECT INTO/literal command, since this is equivalent to
--- 36,43 
   The literalddl_command_start/ event occurs just before the
   execution of a literalCREATE/, literalALTER/, or literalDROP/
   command.  As an exception, however, this event does not occur for
!  DDL commands targeting shared objects mdash; databases, roles, and tablespaces
!  mdash; or for command targeting event triggers themselves.  The event trigger
   mechanism does not support these object types.
   literalddl_command_start/ also occurs just before the execution of a
   literalSELECT INTO/literal command, since this is equivalent to
***
*** 46,51 
--- 46,61 
 /para
  
 para
+ To list all objects that have been deleted as part of executing a
+ command, use the set returning
+ function literalpg_event_trigger_dropped_objects()/ from
+ your literalddl_command_end/ event trigger code (see
+ xref linkend=functions-event-triggers). Note that
+ the trigger is executed after the objects have been deleted from the
+ system catalogs, so it's not possible to look them up anymore.
+/para
+ 
+para
   Event triggers (like other functions) cannot be executed in an aborted
   transaction.  Thus, if a DDL command fails with an error, any associated
   literalddl_command_end/ triggers will not be executed.  Conversely,
***
*** 433,438 
--- 443,453 
  entry align=centerliteralX/literal/entry
 /row
 row
+ entry align=leftliteralDROP OWNED/literal/entry
+ entry align=centerliteralX/literal/entry
+ entry align=centerliteralX/literal/entry
+/row
+row
  entry align=leftliteralDROP RULE/literal/entry
  entry align=centerliteralX/literal/entry
  entry align=centerliteralX/literal/entry
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
***
*** 15980,15983  FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger();
--- 15980,16033 
  xref linkend=SQL-CREATETRIGGER.
  /para
/sect1
+ 
+   sect1 id=functions-event-triggers
+titleEvent Trigger Functions/title
+ 
+indexterm
+  primarypg_event_trigger_dropped_objects/primary
+/indexterm
+ 
+para
+ Currently productnamePostgreSQL/ provides one built-in event trigger
+ helper function, functionpg_event_trigger_dropped_objects/, which
+ lists all object dropped by the command in whose literalddl_command_end/
+ event it is called.  If the function is run in a context other than a
+ literalddl_command_end/ event trigger function, or if it's run in the
+ literalddl_command_end/ event of a command that does not drop objects,
+ it will return the empty set.
+ /para
+ 
+para
+ The functionpg_event_trigger_dropped_objects/ function can be used
+ in an event trigger like this:
+ programlisting
+ CREATE FUNCTION test_event_trigger_for_drops()
+ RETURNS event_trigger LANGUAGE plpgsql AS $$
+ DECLARE
+ obj record;
+ BEGIN
+ FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects()
+ LOOP
+ RAISE NOTICE '% dropped object: % %.% %',
+  tg_tag,
+  obj.object_type,
+  obj.schema_name,
+  obj.object_name,
+  obj.subobject_name;
+ END LOOP;
+ END
+ $$;
+ CREATE EVENT TRIGGER test_event_trigger_for_drops
+ON ddl_command_end
+EXECUTE PROCEDURE test_event_trigger_for_drops();
+ /programlisting
+ /para
+ 
+  para
+For more information about event triggers,
+see xref linkend=event-triggers.
+ /para
+   /sect1
+ 
  /chapter
*** a/src/backend/catalog/dependency.c
--- b/src/backend/catalog/dependency.c
***
*** 257,262  performDeletion(const ObjectAddress *object,
--- 257,268 
  	{
  		ObjectAddress *thisobj = 

Re: [HACKERS] pg_upgrade segfaults when given an invalid PGSERVICE value

2013-03-20 Thread Steve Singer

On 13-03-20 02:17 PM, Bruce Momjian wrote:

On Wed, Mar 20, 2013 at 01:30:20PM -0400, Tom Lane wrote:




While this surely isn't the nicest answer, it doesn't seem totally
unreasonable to me.  A bad service name indeed does not contribute
anything to the set of defaults available.


I think the concern is that the services file could easily change the
defaults that are used for connecting, though you could argue that the
real defaults for a bad service entry are properly returned.


Yes, my concern is that if I have a typo in the value of PGSERVICE I
don't want to end up getting a connection to localhost instead of an
error.


 From an end-user expectations point of view, I am okay with somehow
marking the structure returned by PQconndefaults in a way that the
connect calls will later fail.







--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Should commit_delay be PGC_SIGHUP?

2013-03-20 Thread Peter Geoghegan
I realize that this isn't terribly critical, but I'd like to suggest
that commit_delay be made PGC_SIGHUP on 9.3 (it's currently
PGC_USERSET). It's not that a poorly chosen commit_delay setting has
the potential to adversely affect other backends where the setting
*has* been set in those other backends in a suitable way - the same
thing can surely be said for work_mem. It just seems to me that
commit_delay is now something that's intended to work at the cluster
granularity, and as such it seems like almost a misrepresentation to
make it PGC_USERSET.

The fact is that whichever backend happens to end up becoming the
group commit leader from one XLogFlush() call to the next is, for all
practical purposes, unpredictable. You cannot reasonably hope to avoid
a delay within an important transaction that needs to prioritize
keeping its own latency low over total cluster throughput. If you set
commit_delay to 0 in your important transaction with this is mind,
your chances of becoming the group commit leader and avoiding the
delay are slim to almost none. Much more often than not, the important
transaction will end up becoming a group commit follower, and it'll
still spend a significant fraction of commit_delay (about 1/2, on
average) blocking on LWLockAcquireOrWait().
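
The "about 1/2, on average" figure can be sanity-checked with a toy model. This is illustrative arithmetic only, not PostgreSQL code, and the uniform-arrival assumption is mine:

```python
# Toy model: commit_delay is in microseconds.  Assume a follower arrives
# t microseconds into the leader's sleep, uniformly over the window, and
# then waits the remaining (delay - t) microseconds.

def expected_follower_wait(delay_us):
    """Average remaining wait over all integer arrival offsets."""
    return sum(delay_us - t for t in range(delay_us)) / delay_us

# For commit_delay = 1000 this gives 500.5, i.e. roughly delay / 2.
print(expected_follower_wait(1000))
```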

-- 
Peter Geoghegan




Re: [HACKERS] pg_upgrade segfaults when given an invalid PGSERVICE value

2013-03-20 Thread Tom Lane
Steve Singer ssin...@ca.afilias.info writes:
 On 13-03-20 02:17 PM, Bruce Momjian wrote:
 I think the concern is that the services file could easily change the
 defaults that are used for connecting, though you could argue that the
 real defaults for a bad service entry are properly returned.

 Yes, my concern is that if I have a typo in the value of PGSERVICE I
 don't want to end up getting a connection to localhost instead
 of an error.

  From an end-user expectations point of view, I am okay with somehow
 marking the structure returned by PQconndefaults in a way that the
 connect calls will later fail.

Unless the program changes the value of PGSERVICE, surely all subsequent
connection attempts will fail for the same reason, regardless of what
PQconndefaults returns?

regards, tom lane




Re: [HACKERS] [pgsql-advocacy] Call for Google Summer of Code mentors, admins

2013-03-20 Thread Dimitri Fontaine
Atri Sharma atri.j...@gmail.com writes:
 We can use a scheduling algorithm, and can define a pool of tasks as well as
 a time constraint for the amount of time which can be used for running the
 tasks. Then, a scheduling algorithm can pick tasks from the pool based on
 priorities and the time duration of a task. I can see a dynamic programming
 solution to this problem.

I think mcron already implements it all and is made to be embedded into
a larger program.

  http://www.gnu.org/software/mcron/

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Let's invent a function to report lock-wait-blocking PIDs

2013-03-20 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 I propose that we should add a backend function that simplifies this
 type of query.  The API that comes to mind is (name subject to
 bikeshedding)
 
 pg_blocking_pids(pid int) returns int[]

+1

 If we want a global view of the who-blocks-whom situation, I think we'll
 need another approach.  But since this way solves isolationtester's
 problem fairly neatly, I was hopeful that it would be useful for other
 apps too.

What about a function

  pg_is_lock_exclusive(lock, lock) returns boolean
  pg_is_lock_exclusive(lock[], lock[]) returns boolean

I suppose that the lock type would be text ('ExclusiveLock'), but we
could also expose a new ENUM type for that (pg_lock_mode). If we do
that, we can also provide operators such as the following… I did try to
search for some existing ones but failed to find any.

  pg_lock_mode && pg_lock_mode
  pg_lock_mode | pg_lock_mode

Equipped with that, it should be possible to come up with a recursive
query on pg_locks that displays the whole graph, which we could then
provide as one of our system views.
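
A rough sketch of the proposed predicates, assuming text mode names. The conflict table below is a hand-written subset of the lock conflict matrix, for illustration only:

```python
# Illustrative subset of the lock conflict matrix: each mode maps to the
# set of modes it conflicts with.
CONFLICTS = {
    "AccessShareLock": {"AccessExclusiveLock"},
    "RowExclusiveLock": {"ShareLock", "ShareRowExclusiveLock",
                         "ExclusiveLock", "AccessExclusiveLock"},
    "ShareLock": {"RowExclusiveLock", "ShareUpdateExclusiveLock",
                  "ShareRowExclusiveLock", "ExclusiveLock",
                  "AccessExclusiveLock"},
    "AccessExclusiveLock": {"AccessShareLock", "RowShareLock",
                            "RowExclusiveLock", "ShareUpdateExclusiveLock",
                            "ShareLock", "ShareRowExclusiveLock",
                            "ExclusiveLock", "AccessExclusiveLock"},
}

def is_lock_exclusive(mode_a, mode_b):
    """pg_is_lock_exclusive(lock, lock): do the two modes conflict?"""
    return mode_b in CONFLICTS.get(mode_a, set())

def any_lock_exclusive(modes_a, modes_b):
    """pg_is_lock_exclusive(lock[], lock[]): any pairwise conflict?"""
    return any(is_lock_exclusive(a, b) for a in modes_a for b in modes_b)
```

With a predicate like this, the hairy CASE expression over mode names collapses to a single call per pair of lock requests.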

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Materialized view assertion failure in HEAD

2013-03-20 Thread Kevin Grittner
Kevin Grittner kgri...@ymail.com wrote:
 Robert Haas robertmh...@gmail.com wrote:

 It seems to me that the right place to fix this is in
 interpretOidsOption(), by returning false rather than
 default_with_oids whenever the relation is a materialized view.

 I like it.

In working up a patch for this approach, I see that if CREATE
FOREIGN TABLE is executed with default_with_oids set to true, it
adds an oid column which appears to be always zero in my tests so
far (although maybe other FDWs support it?).  Do we want to leave
that alone?  If we're going to add code to ignore that setting for
matviews do we also want to ignore it for FDWs?

[ thinks... ]

I suppose I should post a patch which preserves the status quo for
FDWs and treat that as a separate issue.  So, rough cut attached.
Obviously some docs should be added around this, and I still need
to do another pass to make sure I didn't miss anything; but it
passes make world-check, make installworld-check, and the
regression database can be dumped and loaded without problem.

Comments?

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
*** a/src/backend/commands/createas.c
--- b/src/backend/commands/createas.c
***
*** 218,228  GetIntoRelEFlags(IntoClause *intoClause)
  	 * because it doesn't have enough information to do so itself (since we
  	 * can't build the target relation until after ExecutorStart).
  	 */
! 	if (interpretOidsOption(intoClause->options))
  		flags = EXEC_FLAG_WITH_OIDS;
  	else
  		flags = EXEC_FLAG_WITHOUT_OIDS;
  
  	if (intoClause->skipData)
  		flags |= EXEC_FLAG_WITH_NO_DATA;
  
--- 218,232 
  	 * because it doesn't have enough information to do so itself (since we
  	 * can't build the target relation until after ExecutorStart).
  	 */
! 	if (interpretOidsOption(intoClause->options, intoClause->relkind))
  		flags = EXEC_FLAG_WITH_OIDS;
  	else
  		flags = EXEC_FLAG_WITHOUT_OIDS;
  
+ 	Assert(intoClause->relkind != RELKIND_MATVIEW ||
+ 		   (flags & (EXEC_FLAG_WITH_OIDS | EXEC_FLAG_WITHOUT_OIDS)) ==
+ 		   EXEC_FLAG_WITHOUT_OIDS);
+ 
  	if (intoClause->skipData)
  		flags |= EXEC_FLAG_WITH_NO_DATA;
  
*** a/src/backend/commands/tablecmds.c
--- b/src/backend/commands/tablecmds.c
***
*** 559,565  DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId)
  	 */
  	descriptor = BuildDescForRelation(schema);
  
! 	localHasOids = interpretOidsOption(stmt->options);
  	descriptor->tdhasoid = (localHasOids || parentOidCount > 0);
  
  	/*
--- 559,565 
  	 */
  	descriptor = BuildDescForRelation(schema);
  
! 	localHasOids = interpretOidsOption(stmt->options, relkind);
  	descriptor->tdhasoid = (localHasOids || parentOidCount > 0);
  
  	/*
*** a/src/backend/parser/parse_clause.c
--- b/src/backend/parser/parse_clause.c
***
*** 243,251  interpretInhOption(InhOption inhOpt)
   * table/result set should be created with OIDs. This needs to be done after
   * parsing the query string because the return value can depend upon the
   * default_with_oids GUC var.
   */
  bool
! interpretOidsOption(List *defList)
  {
  	ListCell   *cell;
  
--- 243,254 
   * table/result set should be created with OIDs. This needs to be done after
   * parsing the query string because the return value can depend upon the
   * default_with_oids GUC var.
+  *
+  * Materialized views are handled here rather than reloptions.c because that
+  * code explicitly punts checking for oids to here.
   */
  bool
! interpretOidsOption(List *defList, char relkind)
  {
  	ListCell   *cell;
  
***
*** 256,264  interpretOidsOption(List *defList)
--- 259,277 
  
  		if (def->defnamespace == NULL &&
  			pg_strcasecmp(def->defname, "oids") == 0)
+ 		{
+ 			if (relkind == RELKIND_MATVIEW)
+ ereport(ERROR,
+ 		(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ 		 errmsg("unrecognized parameter \"oids\"")));
+ 
  			return defGetBoolean(def);
+ 		}
  	}
  
+ 	if (relkind == RELKIND_MATVIEW)
+ 		return false;
+ 
  	/* OIDS option was not specified, so use default. */
  	return default_with_oids;
  }
*** a/src/backend/parser/parse_utilcmd.c
--- b/src/backend/parser/parse_utilcmd.c
***
*** 199,209  transformCreateStmt(CreateStmt *stmt, const char *queryString)
--- 199,212 
  	{
  		cxt.stmtType = "CREATE FOREIGN TABLE";
  		cxt.isforeign = true;
+ 		cxt.hasoids = interpretOidsOption(stmt->options,
+ 		  RELKIND_FOREIGN_TABLE);
  	}
  	else
  	{
  		cxt.stmtType = "CREATE TABLE";
  		cxt.isforeign = false;
+ 		cxt.hasoids = interpretOidsOption(stmt->options, RELKIND_RELATION);
  	}
  	cxt.relation = stmt-relation;
  	cxt.rel = NULL;
***
*** 217,223  transformCreateStmt(CreateStmt *stmt, const char *queryString)
  	cxt.blist = NIL;
  	cxt.alist = NIL;
  	cxt.pkey = NULL;
- 	cxt.hasoids = interpretOidsOption(stmt->options);
  
  	Assert(!stmt->ofTypename || !stmt->inhRelations);	/* grammar enforces */
  
--- 220,225 
*** 

Re: [HACKERS] Let's invent a function to report lock-wait-blocking PIDs

2013-03-20 Thread Tom Lane
Dimitri Fontaine dimi...@2ndquadrant.fr writes:
 Tom Lane t...@sss.pgh.pa.us writes:
 If we want a global view of the who-blocks-whom situation, I think we'll
 need another approach.  But since this way solves isolationtester's
 problem fairly neatly, I was hopeful that it would be useful for other
 apps too.

 What about a function

   pg_is_lock_exclusive(lock, lock) returns boolean
   pg_is_lock_exclusive(lock[], lock[]) returns boolean

 I suppose that the lock type would be text ('ExclusiveLock'), but we
 could also expose a new ENUM type for that (pg_lock_mode).

I don't have an objection to providing such a function, but it doesn't
do anything for the problem beyond allowing getting rid of the hairy
case expression.  That's a good thing to do of course --- but what about
the indirect-blockage issue?

regards, tom lane




Re: [HACKERS] Let's invent a function to report lock-wait-blocking PIDs

2013-03-20 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
   pg_is_lock_exclusive(lock, lock) returns boolean
   pg_is_lock_exclusive(lock[], lock[]) returns boolean

 I suppose that the lock type would be text ('ExclusiveLock'), but we
 could also expose a new ENUM type for that (pg_lock_mode).

 I don't have an objection to providing such a function, but it doesn't
 do anything for the problem beyond allowing getting rid of the hairy
 case expression.  That's a good thing to do of course --- but what about
 the indirect-blockage issue?

It's too late for my brain to build the full answer, but the idea is that
we have another way to build the dependency cycles in the pg_locks query,
and then we can aggregate locks at each level and check for conflicts
once we have accumulated the data.

Is that even possible? E_GOTOSLEEP.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




[PATCH] Exorcise zero-dimensional arrays (Was: Re: [HACKERS] Should array_length() Return NULL)

2013-03-20 Thread Brendan Jurd
On 17 March 2013 05:19, Tom Lane t...@sss.pgh.pa.us wrote:
 Brendan Jurd dire...@gmail.com writes:
 On 16 March 2013 09:07, Tom Lane t...@sss.pgh.pa.us wrote:
 The thing is that that syntax creates an array of zero dimensions,
 not one that has 1 dimension and zero elements.

 I'm going to ask the question that immediately comes to mind: Is there
 anything good at all about being able to define a zero-dimensional
 array?

 Perhaps not.  I think for most uses, a 1-D zero-length array would be
 just as good.  I guess what I'd want to know is whether we also need
 to support higher-dimensional zero-size arrays, and if so, what does
 the I/O syntax for those look like?

 Another fly in the ointment is that if we do redefine '{}' as meaning
 something other than a zero-D array, how will we handle existing
 database entries that are zero-D arrays?


Hello hackers,

I submit a patch to rectify the weird and confusing quirk of Postgres
to use zero dimensions to signify an empty array.

Rationale:
The concept of an array without dimensions is a) incoherent, and b)
leads to astonishing results from our suite of user-facing array
functions.  array_length, array_dims, array_ndims, array_lower and
array_upper all return NULL when presented such an array.

When a user writes ARRAY[]::int[], they expect to get an empty array,
but instead we've been giving them a bizarre semi-functional
proto-array.  Not cool.

Approach:
The patch teaches postgres to regard zero as an invalid number of
dimensions (which it very much is), and allows instead for fully armed
and operational empty arrays.

The simplest form of empty array is the 1-D empty array (ARRAY[] or
'{}') but the patch also allows for empty arrays over multiple
dimensions, meaning that the final dimension has a length of zero, but
the other dimensions may have any valid length.  For example,
'{{},{},{}}' is a 2-D empty array with dimension lengths {3,0} and
dimension bounds [1:3][1:0].

An empty array dimension may have any valid lower bound, but by
default the lower bound will be 1 and the upper bound will therefore
be 0.
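
The bounds arithmetic described here is simple enough to sketch. Python is used purely for illustration, and dim_bounds is a hypothetical helper, not part of the patch:

```python
# Upper bound = lower bound + length - 1, so a zero-length dimension
# with the default lower bound of 1 gets bounds [1:0].
def dim_bounds(lower, length):
    return (lower, lower + length - 1)

# 1-D empty array, default lower bound: bounds [1:0]
print(dim_bounds(1, 0))                       # (1, 0)

# '{{},{},{}}': 2-D empty array with dimension lengths {3, 0},
# giving bounds [1:3][1:0]
print([dim_bounds(1, 3), dim_bounds(1, 0)])   # [(1, 3), (1, 0)]
```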

Any zero-D arrays read in from existing database entries will be
output as 1-D empty arrays from array_recv.

There is an existing limitation where you cannot extend a multi-D
array by assigning values to subscripts or slices.  I've made no
attempt to address that limitation.

The patch improves some error reporting, particularly by adding
errdetail messages for ereports of problems parsing a curly-brace
array literal.

There is some miscellaneous code cleanup included; I moved the
hardcoded characters '{', '}', etc. into constants, unwound a
superfluous inner loop from ArrayCount, and corrected some mistakes in
code comments and error texts.  If preferred for reviewing, I can
split those changes (and/or the new errdetails) out into a separate
patch.

Incompatibility:
This patch introduces an incompatible change in the behaviour of the
aforementioned array functions -- instead of returning NULL for empty
arrays they return meaningful values.  Applications relying on the old
behaviour to test for emptiness may be disrupted.  One can
future-proof against this by changing e.g.

IF array_length(arr, 1) IS NULL  ...

to

IF coalesce(array_length(arr, 1), 0) = 0  ...

There is also a change in the way array subscript assignment works.
Previously it wasn't possible to make a distinction between assigning
to an empty array and assigning to a NULL, so in either case an
expression like arr[4] := 1 would create [4:4]={1}.  Now there is
a distinction.  If you assign to an empty array you will get {NULL,
NULL, NULL, 1}, whereas if you assign to a NULL you'll get
[4:4]={1}.

Regression tests:
The regression tests were updated to reflect behavioural changes.

Documentation:
A minor update to the prose in chapter 8.15 is included in the patch.

Issues:
Fixed-length arrays (like oidvector) are zero-based, which means that
they are rendered into string form with their dimension info shown.
So an empty oidvector now renders as [0:-1]={}, which is correct but
ugly.  I'm not delighted with this side-effect but I'm not sure what
can be done about it.  I'm open to ideas.

Diffstat:
 doc/src/sgml/array.sgml |   4 +-
 src/backend/executor/execQual.c |  77 +-
 src/backend/utils/adt/array_userfuncs.c |  23 +-
 src/backend/utils/adt/arrayfuncs.c  | 671 +--
 src/backend/utils/adt/arrayutils.c  |   4 +
 src/backend/utils/adt/xml.c |  19 +-
 src/include/c.h |   1 +
 src/include/utils/array.h   |   4 +
 src/pl/plpython/plpy_typeio.c   |   3 -
 src/test/isolation/expected/timeouts.out|  16 +-
 src/test/isolation/specs/timeouts.spec  |   8 +-
 src/test/regress/expected/arrays.out|  55 ++--
 src/test/regress/expected/create_function_3.out |   2 +-
 

Re: [PATCH] Exorcise zero-dimensional arrays (Was: Re: [HACKERS] Should array_length() Return NULL)

2013-03-20 Thread David E. Wheeler
On Mar 20, 2013, at 4:45 PM, Brendan Jurd dire...@gmail.com wrote:

 I submit a patch to rectify the weird and confusing quirk of Postgres
 to use zero dimensions to signify an empty array.

Epic. Thank you. I’m very glad now that I complained about this (again)!

Best,

David





Re: [HACKERS] Let's invent a function to report lock-wait-blocking PIDs

2013-03-20 Thread Simon Riggs
On 20 March 2013 18:02, Tom Lane t...@sss.pgh.pa.us wrote:
 The API that comes to mind is (name subject to
 bikeshedding)

 pg_blocking_pids(pid int) returns int[]


Useful. Can we also have an SRF rather than an array?

Does the definition as an array imply anything about our ability to
join an SRF to an array?

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Should commit_delay be PGC_SIGHUP?

2013-03-20 Thread Simon Riggs
On 20 March 2013 21:50, Peter Geoghegan p...@heroku.com wrote:
 I realize that this isn't terribly critical, but I'd like to suggest
 that commit_delay be made PGC_SIGHUP on 9.3 (it's currently
 PGC_USERSET). It's not that a poorly chosen commit_delay setting has
 the potential to adversely affect other backends where the setting
 *has* been set in those other backends in a suitable way - the same
 thing can surely be said for work_mem. It just seems to me that
 commit_delay is now something that's intended to work at the cluster
 granularity, and as such it seems like almost a misrepresentation to
 make it PGC_USERSET.

 The fact is that whichever backend happens to end up becoming the
 group commit leader from one XLogFlush() call to the next is, for all
 practical purposes, unpredictable. You cannot reasonably hope to avoid
 a delay within an important transaction that needs to prioritize
 keeping its own latency low over total cluster throughput. If you set
 commit_delay to 0 in your important transaction with this is mind,
 your chances of becoming the group commit leader and avoiding the
 delay are slim to almost none. Much more often than not, the important
 transaction will end up becoming a group commit follower, and it'll
 still spend a significant fraction of commit_delay (about 1/2, on
 average) blocking on LWLockAcquireOrWait().

+1

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] find libxml2 using pkg-config

2013-03-20 Thread Noah Misch
On Wed, Mar 20, 2013 at 05:17:11PM -0400, Peter Eisentraut wrote:
 On 3/4/13 1:36 PM, Noah Misch wrote:
  Do you have in mind a target system exhibiting a problem?  CentOS 6 ships a
  single xml2-config, but its --cflags --libs output is the same regardless of
  the installed combination of libxml2-dev packages.  Ubuntu 13.04 does not 
  ship
  32-bit libxml2, so it avoids the question.
 
 It does, because you can just install the libxml2 package from the
 32-bit distribution.  (So there will no longer be packages in the 64-bit
 distribution that actually contain 32-bit code, at least in the long run.)

Ah, interesting.  Is there a plan or existing provision for arbitrating the
resulting /usr/bin contents when mixing packages that way?

 But back to the main question:  Stock systems probably won't exhibit the
 problem, because they just dodge the problem by omitting the -L option
 from the xml2-config output and rely on the default linker paths to do
 the right thing.  But if you use a nondefault libxml2 install or a
 nondefault compiler, interesting things might start to happen.
 
 I think at this point, the issue is probably too obscure, and the people
 affected by it hopefully know what they are doing, so it might not be
 important in practice.  In light of the other flaws that you have
 pointed out, I'd be fine with withdrawing this patch for now.  But we
 should keep an eye on the situation.

Agreed.  Convincing a package to properly attach to its dependencies is no
fun.  I wanted to like this patch.

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: [HACKERS] Ignore invalid indexes in pg_dump

2013-03-20 Thread Michael Paquier
On Thu, Mar 21, 2013 at 12:58 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 I had been on the fence about what to do here, but I find Josh's
 arguments persuasive, particularly the second one.  Why shouldn't we
 consider an in-progress index to be an uncommitted DDL change?

 (Now admittedly, there won't *be* any uncommitted ordinary DDL on tables
 while pg_dump is running, because it takes AccessShareLock on all
 tables.  But there could easily be uncommitted DDL against other types
 of database objects, which pg_dump won't even see.)

+1. Playing it safe is a better thing to do for sure, especially if a
restore would fail. I didn't think about that first...

On top of checking indisvalid, I think that some additional checks on
indislive and indisready are also necessary. As indisready has been
introduced in 8.3 and indislive has been added in 9.3, the attached
patch is good I think. I also added a note in the documentation about
invalid indexes not being dumped. Perhaps this patch should be
backpatched to previous versions in order to have the same consistent
behavior.

Regards,
-- 
Michael


20130321_no_dump_indisvalid.patch
Description: Binary data



Re: [HACKERS] Let's invent a function to report lock-wait-blocking PIDs

2013-03-20 Thread Noah Misch
On Wed, Mar 20, 2013 at 02:02:32PM -0400, Tom Lane wrote:
[fun query for appraising lock contention]

 This is way more knowledge than we (should) want a client to embed about
 which lock types block which others.  What's worse, it's still wrong.
 The query will find cases where one of the test sessions *directly*
 blocks another one, but not cases where the blockage is indirect.
 For example, consider that A holds AccessShareLock, B is waiting for
 AccessExclusiveLock on the same object, and C is queued up behind B
 for another AccessShareLock.  This query will not think that C is
 blocked, not even if B is part of the set of sessions of interest
 (because B will show the lock as not granted); but especially so if
 B is not part of the set.
 
 I think that such situations may not arise in the specific context that
 isolationtester says it's worried about, which is to disregard waits for
 locks held by autovacuum.  But in general, you can't reliably tell who's
 blocking whom with a query like this.

Indeed, isolationtester only uses the lock wait query when all but one session
is idle (typically idle-in-transaction).  But a more-general implementation of
the isolationtester concept would need the broader comprehension you describe.

 If isolationtester were the only market for this type of information,
 maybe it wouldn't be worth worrying about.  But I'm pretty sure that
 there are a *lot* of monitoring applications out there that are trying
 to extract who-blocks-whom information from pg_locks.

Agreed; such a feature would carry its own weight.  Unless the cost to
implement it is similar to the overall cost of just making the affected
timeout values high enough, I do think it's best delayed until 9.4.

 I propose that we should add a backend function that simplifies this
 type of query.  The API that comes to mind is (name subject to
 bikeshedding)
 
   pg_blocking_pids(pid int) returns int[]
 
 defined to return NULL if the argument isn't the PID of any backend or
 that backend isn't waiting for a lock, and otherwise an array of the
 PIDs of the backends that are blocking it from getting the lock.
 I would compute the array as
 
   PIDs of backends already holding conflicting locks,
   plus PIDs of backends requesting conflicting locks that are
   ahead of this one in the lock's wait queue,
   plus PIDs of backends that block the latter group of PIDs
   (ie, are holding locks conflicting with their requests,
   or are awaiting such locks and are ahead of them in the queue)
 
 There would be some cases where this definition would be too expansive,
 ie we'd release the waiter after only some of the listed sessions had
 released their lock or request.  (That could happen for instance if we
 concluded we had to move up the waiter's request to escape a deadlock.)
 But I think that it's better to err in that direction than to
 underestimate the set of relevant PIDs.

That definition seems compatible with, albeit overkill for, the needs of
isolationtester.  However, I have an inkling that we should expose those
categories.  Perhaps one of these interfaces?

pg_blocking_pids(pid int, OUT blocker int, OUT waiting bool, OUT direct bool) returns setof record
pg_blocking_pids(pid int, OUT blocker int, OUT how text) returns setof record
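
To make the quoted definition concrete, here is a hedged sketch of the computation over a toy single-lock model. The names and two-mode conflict table are illustrative, not PostgreSQL internals, and indirect blockage through blockers that are themselves waiting is simplified away:

```python
# Two lock modes are enough to replay Tom's A/B/C example.
CONFLICTS = {
    "AccessShareLock": {"AccessExclusiveLock"},
    "AccessExclusiveLock": {"AccessShareLock", "AccessExclusiveLock"},
}

def conflicting(mode_a, mode_b):
    return mode_b in CONFLICTS[mode_a]

def blocking_pids(pid, holders, queue):
    """holders: [(pid, mode)] already granted; queue: [(pid, mode)]
    waiting, in wait-queue order.  Returns None if pid is not waiting."""
    modes = dict(queue)
    if pid not in modes:
        return None
    my_mode = modes[pid]
    ahead = []
    for p, m in queue:
        if p == pid:
            break
        ahead.append((p, m))
    result = []
    # backends already holding conflicting locks
    for p, m in holders:
        if conflicting(my_mode, m):
            result.append(p)
    # waiters ahead of us with conflicting requests, plus the holders
    # that block those waiters in turn
    for p, m in ahead:
        if conflicting(my_mode, m):
            result.append(p)
            for hp, hm in holders:
                if conflicting(m, hm) and hp not in result:
                    result.append(hp)
    return result

# A (101) holds AccessShareLock; B (102) waits for AccessExclusiveLock;
# C (103) is queued behind B for another AccessShareLock.  C is blocked
# by B directly and by A indirectly.
holders = [(101, "AccessShareLock")]
queue = [(102, "AccessExclusiveLock"), (103, "AccessShareLock")]
print(blocking_pids(103, holders, queue))   # [102, 101]
```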

Thanks,
nm

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: [HACKERS] Let's invent a function to report lock-wait-blocking PIDs

2013-03-20 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 On 20 March 2013 18:02, Tom Lane t...@sss.pgh.pa.us wrote:
 The API that comes to mind is (name subject to
 bikeshedding)
 
 pg_blocking_pids(pid int) returns int[]

 Useful. Can we also have an SRF rather than an array?

I thought about that, but at least for the isolationtester use-case,
the array result is clearly easier to use.  You can get from one to the
other with unnest() or array_agg(), so I don't really feel a need to
provide both.  Can you generate use-cases where the set-result approach
is superior?

regards, tom lane




Re: [HACKERS] [pgsql-advocacy] Call for Google Summer of Code mentors, admins

2013-03-20 Thread Magnus Hagander
On Mar 20, 2013 11:14 PM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:

 Atri Sharma atri.j...@gmail.com writes:
  We can use a scheduling algorithm, and can define a pool of tasks as
  well as a time constraint for the amount of time which can be used for
  running the tasks. Then, a scheduling algorithm can pick tasks from the
  pool based on priorities and the time duration of a task. I can see a
  dynamic programming solution to this problem.

 I think mcron already implements it all and is made to be embedded into
 a larger program.


As long as your larger program is GPL; it's not even LGPL on that one. I'd
think that's a killer for that idea...

/Magnus


Re: [HACKERS] find libxml2 using pkg-config

2013-03-20 Thread Tom Lane
Noah Misch n...@leadboat.com writes:
 On Wed, Mar 20, 2013 at 05:17:11PM -0400, Peter Eisentraut wrote:
 On 3/4/13 1:36 PM, Noah Misch wrote:
 Do you have in mind a target system exhibiting a problem?  CentOS 6 ships a
 single xml2-config, but its --cflags --libs output is the same regardless of
the installed combination of libxml2-dev packages.  Ubuntu 13.04 does not ship
32-bit libxml2, so it avoids the question.

 It does, because you can just install the libxml2 package from the
 32-bit distribution.  (So there will no longer be packages in the 64-bit
 distribution that actually contain 32-bit code, at least in the long run.)

 Ah, interesting.  Is there a plan or existing provision for arbitrating the
 resulting /usr/bin contents when mixing packages that way?

There is not.  With my other hat on (the red fedora), this annoys me
tremendously.  I believe the rationale is that users shouldn't really
care whether /usr/bin/foo is a 32-bit or 64-bit executable as long as it
gets the job done ... but that's a pretty thin rationale IMHO.  In any
case, the existing design for multilib only takes care to allow parallel
installation of 32-bit and 64-bit *libraries*, not executables.  For the
latter, I think what happens is you get the executables from whichever
package you installed last.  At least on Red Hat-based systems.
Possibly other distros have a saner design here.

 I think at this point, the issue is probably too obscure, and the people
 affected by it hopefully know what they are doing, so it might not be
 important in practice.  In light of the other flaws that you have
 pointed out, I'd be fine with withdrawing this patch for now.  But we
 should keep an eye on the situation.

 Agreed.  Convincing a package to properly attach to its dependencies is no
 fun.  I wanted to like this patch.

In the end, I think this is mostly the territory of packagers.  We can't
force the right result as a platform-agnostic upstream source, and I'm
not sure we should try.  I would suggest that any changes in this area
be driven by actual complaints from actual packagers.

regards, tom lane




Re: [HACKERS] Single-argument variant for array_length and friends?

2013-03-20 Thread Tom Lane
Brendan Jurd dire...@gmail.com writes:
 While I was working on my empty array patch I was frequently irritated
 by the absence of an array_length(anyarray).  The same goes for
 array_upper and array_lower.  Most of the time when I work with
 arrays, they are 1-D, and it's inelegant to have to specify which
 dimension I mean when there is only one to choose from.

 The question I have (and would appreciate your input on) is how such
 single-argument variants should behave when operating on an array with
 multiple dimensions?

I'm not entirely convinced that this is a good idea, but if we're going
to allow it I would argue that array_length(a) should be defined as
array_length(a, 1).  The other possibilities are too complicated to
explain in as few words.
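
To make that concrete, a quick sketch of the suggested semantics (the
single-argument form is hypothetical; only the two-argument calls work today):

```sql
-- Today the dimension must be spelled out, even for 1-D arrays:
SELECT array_length(ARRAY[10, 20, 30], 1);      -- 3

-- The proposed array_length(a) would be defined as array_length(a, 1),
-- so on a multidimensional array it would report only the first dimension:
SELECT array_length(ARRAY[[1, 2], [3, 4]], 1);  -- 2, not 4
```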

regards, tom lane




Re: [HACKERS] Trust intermediate CA for client certificates

2013-03-20 Thread Craig Ringer

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 03/19/2013 09:46 PM, Stephen Frost wrote:
 * Craig Ringer (cr...@2ndquadrant.com) wrote:
 As far as I'm concerned that's the immediate problem fixed. It may be
 worth adding a warning on startup if we find non-self-signed certs in
 root.crt too, something like 'WARNING: Intermediate certificate found in
 root.crt. This does not do what you expect and your configuration may be
 insecure; see the Client Certificates chapter in the documentation.'

 I'm not sure that I follow this logic, unless you're proposing that
 intermediate CAs only be allowed to be picked up from system-wide
 configuration? That strikes me as overly constrained as I imagine there
 are valid configurations today which have intermediate CAs listed, with
 the intention that they be available for PG to build the chain from a
 client cert that is presented back up to the root. Now, the client
 might be able to provide such an intermediate CA cert too (one of the
 fun things about SSL is that the client can send any 'missing' certs to
 the server, if it has them available..), but it also might not.


Drat, you're quite right. I've always included the full certificate
chain in client certs but it's in no way required.

I guess that pretty much means maintaining the status quo and documenting
it better.

- -- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJRSp3fAAoJELBXNkqjr+S2+JYH+wUo2mCMB2n3/mXo24l0rO5+
mxS6d9uJNIZZErZX2I/NfY59kLX1ypUAeGhQnCSOZuxig6Xd91nXzRdkaQF/+WHa
9hEAXbOtl7bMgj8cEIfloQlSU94VXamH53i5YL5ZVLqkQG/7uknY05NbJs3IGM5g
ALrEgo3XOC8JyUz21hZzaQOb2vbdSh0F0O17EoJz1fLY6l5ScFnLWihKYurp5Oq0
em1bsN0GKckmSa7a9mJ37Hvowi92epbtF4XR1DyrQGOHQSCLq0NnCthA5MtdPXN0
+BJQWZfx0qcRcrHMILkFa0Uu7Bc9Ao0q06l55DNSyYXx1FWN0cBArGpXcoPb8Zs=
=BAYd
-END PGP SIGNATURE-


