Re: [HACKERS] Escaping ":" in .pgpass - code or docs bug?

2011-12-17 Thread Ross Reedstrom
On Fri, Dec 16, 2011 at 02:55:09PM +, Richard Huxton wrote:
> According to the docs [1], you should escape embedded colons in
> .pgpass (fair enough). Below is PG 9.1.1
> 
> user = "te:st", db = "te:st", password = "te:st"
> 
> $ cat ~/.pgpass
> *:*:te:st:te:st:te:st
> $ psql91 -U "te:st" -d "te:st"
> te:st=>
> 
> $ cat ~/.pgpass
> *:*:te\:st:te\:st:te:st
> $ psql91 -U "te:st" -d "te:st"
> te:st=>
> 
> $ cat ~/.pgpass
> *:*:te\:st:te\:st:te\:st
> $ psql91 -U "te:st" -d "te:st"
> psql: FATAL:  password authentication failed for user "te:st"
> password retrieved from file "/home/richardh/.pgpass"
> 
> I'm a bit puzzled how it manages without the escaping in the first
> case. There's a lack of consistency though that either needs
> documenting or fixing.

Hmm, seems the code in fe-connect.c that reads the password out of .pgpass does 
this:

if ((t = pwdfMatchesString(t, hostname)) == NULL ||
(t = pwdfMatchesString(t, port)) == NULL ||
(t = pwdfMatchesString(t, dbname)) == NULL ||
(t = pwdfMatchesString(t, username)) == NULL)
 [...]

pwdfMatchesString 'eats' the string buffer up to the next unmatched character or
unescaped colon.  If control falls out the bottom of that if-chain, the rest of
the line is returned as the candidate password.

Since the code that does the backslash detection is in pwdfMatchesString(), and
the password never goes through that function, the escapes are not cleaned up.

This should be fixed either by changing the documentation to say that colons
and backslashes must not be escaped in the password field (and only there), or
by modifying this function (PasswordFromFile) to silently unescape the password
string. It already copies it.

Perhaps something like (WARNING! untested code, rusty C skills):


*** src/interfaces/libpq/fe-connect.c.orig  2011-12-16 17:44:29.265913914 -0600
--- src/interfaces/libpq/fe-connect.c   2011-12-16 17:46:46.137913871 -0600
***
*** 4920,4925 ****
--- 4920,4934 ----
  			continue;
  		ret = strdup(t);
  		fclose(fp);
+ 
+ 		/* unescape any residual escaped colons or backslashes */
+ 		t = ret;
+ 		while (t[0])
+ 		{
+ 			if (t[0] == '\\' && (t[1] == ':' || t[1] == '\\'))
+ 				memmove(t, t + 1, strlen(t + 1) + 1);
+ 			t++;
+ 		}
  		return ret;
  	}
  
This would be backward compatible, in that existing working passwords would
continue to work, unless they happen to contain the exact strings '\:' or
'\\', in which case the backslash would need to be doubled.
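
For anyone who wants to sanity-check that loop outside libpq, here is a
minimal standalone version of it. This is just my own test harness, not part
of the proposed patch; note it uses memmove, since strcpy on overlapping
buffers is undefined behavior:

#include <stdio.h>
#include <string.h>

static void
unescape_password(char *t)
{
	while (t[0])
	{
		/* collapse "\:" to ":" and "\\" to "\" */
		if (t[0] == '\\' && (t[1] == ':' || t[1] == '\\'))
			memmove(t, t + 1, strlen(t + 1) + 1);
		t++;
	}
}

int
main(void)
{
	char		pw[] = "te\\:st\\\\x";

	unescape_password(pw);
	printf("%s\n", pw);			/* prints: te:st\x */
	return 0;
}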

Ross
-- 
Ross Reedstrom, Ph.D. reeds...@rice.edu
Systems Engineer & Admin, Research Scientistphone: 713-348-6166
Connexions  http://cnx.orgfax: 713-348-3665
Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE

> 
> 
> [1] http://www.postgresql.org/docs/9.1/static/libpq-pgpass.html
> 
> -- 
>   Richard Huxton
>   Archonet Ltd
> 
> -- 
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
> 

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Review: Non-inheritable check constraints

2011-12-17 Thread Nikhil Sontakke
Hi Alvaro,

The patch looks ok to me. I see that we now sort the constraints by
conisonly value too:

@@ -1781,12 +1781,20 @@ describeOneTableDetails(const char *schemaname,
 /* print table (and column) check constraints */
 if (tableinfo.checks)
 {
+char *is_only;
+
+if (pset.sversion > 90100)
+is_only = "r.conisonly";
+else
+is_only = "false AS conisonly";
+
 printfPQExpBuffer(&buf,
-  "SELECT r.conname, "
+  "SELECT r.conname, %s, "
   "pg_catalog.pg_get_constraintdef(r.oid,
true)\n"
   "FROM pg_catalog.pg_constraint r\n"
-   "WHERE r.conrelid = '%s' AND r.contype = 'c'\nORDER BY
1;",
-  oid);
+   "WHERE r.conrelid = '%s' AND r.contype = 'c'\n"
+ "ORDER BY 2, 1;",
+  is_only, oid);
 result = PSQLexec(buf.data, false);
 if (!result)
 goto error_return;

My preference would be to see true values first, so you might want to modify
it to

"ORDER BY 2 desc, 1"

to get that effect. Your call, finally.

Regards,
Nikhils

On Sat, Dec 17, 2011 at 12:36 AM, Alvaro Herrera  wrote:

> Excerpts from Greg Smith's message of Fri Dec 16 15:02:20 -0300 2011:
> > On 12/04/2011 02:22 AM, Nikhil Sontakke wrote:
> > >
> > > Is it okay to modify an existing constraint to mark it as "only",
> even
> > > if it was originally inheritable?  This is not clear to me.  Maybe
> the
> > > safest course of action is to raise an error.  Or maybe I'm
> misreading
> > > what it does (because I haven't compiled it yet).
> > >
> > >
> > > Hmmm, good question. IIRC, the patch will pass is_only as true only if
> > > it is going to be a locally defined, non-inheritable constraint. So I
> > > went by the logic that since it was ok to merge and mark a constraint
> > > as locally defined, it should be ok to mark it non-inheritable from
> > > this moment on with that new local definition?
>
> I think I misread what that was trying to do.  I thought it would turn
> on the "is only" bit on a constraint that a child had inherited from a
> previous parent, but that was obviously wrong now that I think about it
> again.
>
> > With this open question, this looks like it's back in Alvaro's hands
> > again to me.  This one started the CF as "Ready for Committer" and seems
> > stalled there for now.  I'm not going to touch its status, just pointing
> > this fact out.
>
> Yeah.  Nikhil, Alex, this is the merged patch.  Have a look that it
> still works for you (particularly the pg_dump bits) and I'll commit it.
> I adjusted the regression test a bit too.
>
> Thanks.
>
> --
> Álvaro Herrera 
> The PostgreSQL Company - Command Prompt, Inc.
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>
>


[HACKERS] REMINDER: FOSDEM 2012 - PostgreSQL Devroom: Call for Speakers

2011-12-17 Thread Andreas 'ads' Scherbaum

On 20.11.2011 23:54, Andreas 'ads' Scherbaum wrote:


FOSDEM 2012 - PostgreSQL Devroom: Call for Speakers


The PostgreSQL project will have a Devroom at FOSDEM 2012, which takes
place on February 4-5 in Brussels, Belgium. The Devroom will mainly
cover topics for PostgreSQL users, developers and contributors. For more
information about the event itself, please see the website at
http://www.fosdem.org/2012/ .


We are now accepting proposals for talks. Please note that we only
accept talks in English.

Each session will last 45 minutes (including discussion), and may be on
any topic related to PostgreSQL. Suggested topic areas include:

* Developing applications for PostgreSQL
* Administering large scale PostgreSQL installations
* Case studies and/or success stories of PostgreSQL deployments
* PostgreSQL tools and utilities
* PostgreSQL hacking
* Community & user groups
* Tuning the server
* Migrating from other systems
* Scaling/replication
* Benchmarking & hardware
* PostgreSQL related products

Of course, we're happy to receive proposals for talks on other
PostgreSQL related topics as well.


Please use our conference website to submit your proposal:

https://www.postgresql.eu/events/callforpapers/fosdem2012/


The deadline for submissions is December 20th, 2011. The schedule will
be published and speakers will be informed by the end of the year.


Please also note my email about hotel reservation:

http://archives.postgresql.org/pgeu-general/2011-11/msg0.php



Please keep in mind that the Call for Speakers is open until December 
20th. Only a few days left.

Now is a good time to submit your proposal ;-)


--
Andreas 'ads' Scherbaum
German PostgreSQL User Group
European PostgreSQL User Group - Board of Directors
Volunteer Regional Contact, Germany - PostgreSQL Project

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] REMINDER: Hotel reservation for FOSDEM 2012 - Deadline: December 31st, 2011

2011-12-17 Thread Andreas 'ads' Scherbaum

On 11.11.2011 16:14, Andreas 'ads' Scherbaum wrote:


As in previous years, we will have a devroom at FOSDEM 2012.
We also look forward to having a booth.


We made a group reservation in the Agenda Louise hotel:

Hotel Agenda Louise
rue de Florence 6
B-1000 Brussels
Tel: + 32.2.539.00.31
Fax: + 32.2.539.00.63
www.hotel-agenda.com


This time, as a good customer, we got a special price.

From Friday to Sunday inclusive:
- 80 EUR per night for a single room
- 90 EUR per night for a double room

From Monday to Thursday inclusive:
- 106 EUR per night for a single room
- 120 EUR per night for a double room

Breakfast, taxes and services are included.



If you would like to book a room, please send me an email.
Include your name, email address, room type, and arrival and departure dates.


Important: please send me this information by December 31st, 2011!



If you need a hotel room for FOSDEM, please don't forget to send me an 
email. The deadline is the end of the year. Registrations after this date 
cannot be considered.



--
Andreas 'ads' Scherbaum
German PostgreSQL User Group
European PostgreSQL User Group - Board of Directors
Volunteer Regional Contact, Germany - PostgreSQL Project

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE lock strength reduction patch is unsafe

2011-12-17 Thread Simon Riggs
On Sat, Dec 17, 2011 at 3:22 AM, Robert Haas  wrote:

> I am also not entirely sure I believe that this is plugging all the
> failure cases.  I think that it may just be making the failure cases
> more obscure, rather than really getting rid of them.  Consider
> something like the following:
>
> T1: Update row version A, creating new row version B.
> T2: Begin scanning the catalog in question.  We happen to encounter
> row version B first.  We remember T1's XID as in progress, but don't
> see the row since T1 hasn't committed.
> T1: Rollback.
> T3: Update row version A, creating new row version C.
> T3: Commit.
> T2: Scan now comes to row version A; we don't see that version either,
> since T3 is committed.
>
> I don't think there's any guarantee that T2's scan will see tuples
> inserted after the start of the scan.  If I'm correct about that, and
> I'm pretty sure it's true for sequential scans anyway, then T2's scan
> might end without seeing C either.

A simpler example shows it better. Tom's idea prevents 2 rows from being
returned, but doesn't prevent zero rows.

That's a useful improvement, but not the whole story.

ISTM we can run a scan as we do now, without locking. If the scan returns
zero rows we scan again, but take a definition lock first. ALTER TABLE
holds the definition lock while running, which won't be a problem
because we would only use that on shorter AT statements.

The definition lock is the type of lock described earlier on this
thread, which we already have code for. So we have code for all the
parts we need to make this work.
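
In pseudocode, the retry would look something like this (a toy,
self-contained mock of the idea; scan_catalog() and the lock calls are
made-up names for illustration, not existing code):

#include <stdio.h>
#include <stdbool.h>

static int	scan_attempts = 0;

/* Mock catalog scan: pretend an ALTER TABLE raced with our first,
 * unlocked scan, so we see zero rows the first time only. */
static bool
scan_catalog(void)
{
	return ++scan_attempts > 1;
}

static void
acquire_definition_lock(void)
{
	printf("definition lock acquired\n");
}

static void
release_definition_lock(void)
{
	printf("definition lock released\n");
}

int
main(void)
{
	/* Common fast path: ordinary unlocked scan. */
	if (!scan_catalog())
	{
		/*
		 * Zero rows: rescan under the definition lock, which blocks a
		 * concurrent ALTER TABLE from changing the row underneath us.
		 */
		acquire_definition_lock();
		if (scan_catalog())
			printf("row found on locked rescan\n");
		else
			printf("row really is gone\n");
		release_definition_lock();
	}
	return 0;
}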

So that means we'd be able to plug the gaps noted, without reducing
performance on the common code paths.

I don't see that any of the objections made so far remain valid with that approach.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON for PG 9.2

2011-12-17 Thread Simon Riggs
On Sat, Dec 17, 2011 at 2:26 AM, Robert Haas  wrote:

> In the spirit of Simon's suggestion that we JFDI, I cooked up a patch
> today that JFDI.  See attached.

Which looks very good.

Comments:
* The comment for IDENTIFICATION of json.c says contrib/json/json.c
* json.c contains a duplicate of a line from the header file: "extern Datum
json_in(PG_FUNCTION_ARGS);"

And additionally, a quote from our fine manual...

"Caution: Some XML-related functions may not work at all on non-ASCII
data when the server encoding is not UTF-8. This is known to be an
issue for xpath() in particular."  So I think this approach works
for JSON too.

Adding tests and docs is a must; nothing else is, right now. Once we
have this, others can add the bells and whistles, possibly in 9.2.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Storing hot members of PGPROC out of the band

2011-12-17 Thread Robert Haas
On Sat, Dec 17, 2011 at 1:00 AM, Jim Nasby  wrote:
> I also wonder how much this throws some previous performance tests into 
> suspicion. If it's not uncommon for performance improvement attempts to shift 
> a bottleneck to a different part of the system and marginally hurt 
> performance then we might be throwing away good performance improvement ideas 
> before we should...

I think we are (mostly) OK on this point, at least as far as the work
I've been doing.  We've actually had a few previous instances of this
phenomenon - e.g. when I first committed my fastlock patch,
performance actually got worse if you had >40 cores doing read-only
queries, because speeding up the lock manager made it possible for the
spinlock protecting SInvalReadLock to mess things up royally.
Nevertheless, we got it committed - and fixed the SInvalReadLock
problem, too.  This one is/was somewhat more subtle, but I'm feeling
pretty good about our chances of making at least some further progress
in time for 9.2.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Caching for stable expressions with constant arguments v3

2011-12-17 Thread Marti Raudsepp
On Fri, Dec 16, 2011 at 21:32, Jaime Casanova  wrote:
> Actually I tried some benchmarks with the original version of the
> patch and saw some regression with normal pgbench runs, but it wasn't
> much... so I was trying to find out some queries that show a benefit
> now that we have it; that would make things a lot easier

Ah, one more trick for testing this patch: if you build with
-DDEBUG_CACHEEXPR=1, then EXPLAIN VERBOSE displays cached
subexpressions inside a CACHE[...] marker.

Regards,
Marti

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] review: CHECK FUNCTION statement

2011-12-17 Thread Pavel Stehule
2011/12/16 Albe Laurenz :
> Pavel Stehule wrote:
>> one small update - better emulation of environment for security
>> definer functions
>
> Patch applies and compiles fine, core functionality works fine.
>
> I found a little bug:
>
> In backend/commands/functioncmds.c,
> function CheckFunction(CheckFunctionStmt *stmt),
> while you perform the table scan for CHECK FUNCTION ALL,
> you use the variable funcOid to hold the OID of the current
> function in the loop.
>
> If no appropriate function is found in the loop, the
> check immediately after the table scan will not succeed
> because funcOid holds the OID of the last function scanned
> in the loop.
> As a consequence, CheckFunctionById is called for this
> function.
>
> Here is a demonstration:
> test=> CHECK FUNCTION ALL IN SCHEMA pg_catalog;
> [...]
> NOTICE:  skip check function "plpgsql_checker(oid,regclass,text[])", uses C 
> language
> NOTICE:  skip check function "plperl_call_handler()", uses C language
> NOTICE:  skip check function "plperl_inline_handler(internal)", uses C 
> language
> NOTICE:  skip check function "plperl_validator(oid)", uses C language
> NOTICE:  skip check function "plperl_validator(oid)", language "c" hasn't 
> checker function
> CHECK FUNCTION
>
> when it should be:
> test=> CHECK FUNCTION ALL IN SCHEMA pg_catalog;
> [...]
> NOTICE:  skip check function "plpgsql_checker(oid,regclass,text[])", uses C 
> language
> NOTICE:  skip check function "plperl_call_handler()", uses C language
> NOTICE:  skip check function "plperl_inline_handler(internal)", uses C 
> language
> NOTICE:  skip check function "plperl_validator(oid)", uses C language
> NOTICE:  nothing to check
> CHECK FUNCTION

I'll fix it
>
>
> Another thing struck me as odd:
>
> You have the option "fatal_errors" for the checker function, but you
> special case it in CheckFunction(CheckFunctionStmt *stmt) and turn
> errors to warnings if it is not set.
>
> Wouldn't it be better to have the checker function ereport a WARNING
> or an ERROR depending on the setting? Options should be handled by the
> checker function.

The behavior I use is more robust, and it is only a few more lines of
code.

a) It ensures predictable behavior for third-party checker functions -
an exception in a checker function doesn't break a multi-statement
CHECK FUNCTION
b) It ensures the same format of error messages - because they are
transformed at the top level

Regards

Pavel

>
> Yours,
> Laurenz Albe

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] WIP patch: Improve relation size functions such as pg_relation_size() to avoid producing an error when called against a no longer visible relation

2011-12-17 Thread Phil Sorber
Attached is a patch that addresses the todo item "Improve relation
size functions such as pg_relation_size() to avoid producing an error
when called against a no longer visible relation."

http://archives.postgresql.org/pgsql-general/2010-10/msg00332.php

Instead of returning an error, they now return NULL if the OID is
still found in pg_class when using SnapshotAny. I only applied it to 4
functions: pg_relation_size, pg_total_relation_size,
pg_table_size and pg_indexes_size. These are the ones that were using
relation_open(). I changed them to use try_relation_open(). For
three of them I had to move the try_relation_open() call up one level
in the call stack and change the parameter types for some support
functions from Oid to Relation. They now also call a new function,
relation_recently_dead(), which checks whether the relation is still
visible under SnapshotAny. All regression tests passed.

Is SnapshotAny the snapshot I should be using? It seems to get the
correct results. I can drop a table and I get NULL. Then after a
vacuumdb it returns an error.
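
For quick testing from a client, a small libpq harness along these lines
can exercise the new behavior. This is test scaffolding only, not part of
the patch; the table name is arbitrary:

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

/* run a query, bailing out on any error */
static PGresult *
run(PGconn *conn, const char *sql)
{
	PGresult   *res = PQexec(conn, sql);

	if (PQresultStatus(res) != PGRES_TUPLES_OK &&
		PQresultStatus(res) != PGRES_COMMAND_OK)
	{
		fprintf(stderr, "%s: %s", sql, PQerrorMessage(conn));
		exit(1);
	}
	return res;
}

int
main(void)
{
	PGconn	   *conn = PQconnectdb("");		/* params from environment */
	PGresult   *res;
	char		sql[64];

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "%s", PQerrorMessage(conn));
		return 1;
	}

	PQclear(run(conn, "CREATE TABLE size_test (i int)"));

	/* remember the OID, since the name won't resolve after the DROP */
	res = run(conn, "SELECT 'size_test'::regclass::oid");
	snprintf(sql, sizeof(sql), "SELECT pg_relation_size(%s)",
			 PQgetvalue(res, 0, 0));
	PQclear(res);

	PQclear(run(conn, "DROP TABLE size_test"));

	/* patched: NULL; unpatched: "could not open relation" error */
	res = run(conn, sql);
	if (PQgetisnull(res, 0, 0))
		printf("NULL - relation recently dead\n");
	else
		printf("size = %s\n", PQgetvalue(res, 0, 0));
	PQclear(res);
	PQfinish(conn);
	return 0;
}
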
diff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c
new file mode 100644
index 2ee5966..0623935
*** a/src/backend/utils/adt/dbsize.c
--- b/src/backend/utils/adt/dbsize.c
***
*** 24,32 
--- 24,34 
  #include "storage/fd.h"
  #include "utils/acl.h"
  #include "utils/builtins.h"
+ #include "utils/fmgroids.h"
  #include "utils/rel.h"
  #include "utils/relmapper.h"
  #include "utils/syscache.h"
+ #include "utils/tqual.h"
  
  
  /* Return physical size of directory contents, or 0 if dir doesn't exist */
*** calculate_relation_size(RelFileNode *rfn
*** 281,286 
--- 283,321 
  	return totalsize;
  }
  
+ /*
+  * relation_recently_dead
+  *
+  * Check to see if a relation exists in SnapshotAny
+  */
+ static bool
+ relation_recently_dead(Oid relOid)
+ {
+ 	Relation		classRel;
+ 	ScanKeyData		key[1];
+ 	HeapScanDesc	scan;
+ 	bool			status;
+ 
+ 	classRel = heap_open(RelationRelationId, AccessShareLock);
+ 
+ 	ScanKeyInit(&key[0],
+ Anum_pg_class_relfilenode,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(relOid));
+ 
+ 	scan = heap_beginscan(classRel, SnapshotAny, 1, key);
+ 
+ 	if (heap_getnext(scan, ForwardScanDirection) != NULL)
+ 		status = true;
+ 	else
+ 		status = false;
+ 
+ 	heap_endscan(scan);
+ 	heap_close(classRel, AccessShareLock);
+ 
+ 	return status;
+ }
+ 
  Datum
  pg_relation_size(PG_FUNCTION_ARGS)
  {
*** pg_relation_size(PG_FUNCTION_ARGS)
*** 289,295 
  	Relation	rel;
  	int64		size;
  
! 	rel = relation_open(relOid, AccessShareLock);
  
  	size = calculate_relation_size(&(rel->rd_node), rel->rd_backend,
  			  forkname_to_number(text_to_cstring(forkName)));
--- 324,338 
  	Relation	rel;
  	int64		size;
  
! 	rel = try_relation_open(relOid, AccessShareLock);
! 
! 	if (rel == NULL)
! 	{
! 		if ((relOid != 0) && relation_recently_dead(relOid))
! 			PG_RETURN_NULL();
! 		else
! 			elog(ERROR, "could not open relation with OID %u", relOid);
! 	}
  
  	size = calculate_relation_size(&(rel->rd_node), rel->rd_backend,
  			  forkname_to_number(text_to_cstring(forkName)));
*** calculate_toast_table_size(Oid toastreli
*** 339,352 
   * those won't have attached toast tables, but they can have multiple forks.
   */
  static int64
! calculate_table_size(Oid relOid)
  {
  	int64		size = 0;
- 	Relation	rel;
  	ForkNumber	forkNum;
  
- 	rel = relation_open(relOid, AccessShareLock);
- 
  	/*
  	 * heap size, including FSM and VM
  	 */
--- 382,392 
   * those won't have attached toast tables, but they can have multiple forks.
   */
  static int64
! calculate_table_size(Relation rel)
  {
  	int64		size = 0;
  	ForkNumber	forkNum;
  
  	/*
  	 * heap size, including FSM and VM
  	 */
*** calculate_table_size(Oid relOid)
*** 360,367 
  	if (OidIsValid(rel->rd_rel->reltoastrelid))
  		size += calculate_toast_table_size(rel->rd_rel->reltoastrelid);
  
- 	relation_close(rel, AccessShareLock);
- 
  	return size;
  }
  
--- 400,405 
*** calculate_table_size(Oid relOid)
*** 371,382 
   * Can be applied safely to an index, but you'll just get zero.
   */
  static int64
! calculate_indexes_size(Oid relOid)
  {
  	int64		size = 0;
- 	Relation	rel;
- 
- 	rel = relation_open(relOid, AccessShareLock);
  
  	/*
  	 * Aggregate all indexes on the given relation
--- 409,417 
   * Can be applied safely to an index, but you'll just get zero.
   */
  static int64
! calculate_indexes_size(Relation rel)
  {
  	int64		size = 0;
  
  	/*
  	 * Aggregate all indexes on the given relation
*** calculate_indexes_size(Oid relOid)
*** 405,412 
  		list_free(index_oids);
  	}
  
- 	relation_close(rel, AccessShareLock);
- 
  	return size;
  }
  
--- 440,445 
*** Datum
*** 414,429 
  pg_table_size(PG_FUNCTION_ARGS)
  {
  	Oid			relOid = PG_GETARG_OID(0);
  
! 	PG_RETURN_INT64(calculate_table_size(relOid));
  

Re: [HACKERS] review: CHECK FUNCTION statement

2011-12-17 Thread Pavel Stehule
2011/12/16 Greg Smith :
> I just poked at this a bit myself to see how the patch looked.  There's just
> over 4000 lines in the diff.  Even though 1/4 of that is tests, which is
> itself encouraging, that's still a good-sized feature.  The rate at which
> the code here has still been changing regularly has me nervous about
> considering this a commit candidate right now though.  It seems like it
> still needs a bit more time to have the problems squeezed out.
>
> Two ideas I was thinking about here:
>
> -If you take a step back and look at where the problem parts of the code
> have been recently, are there any new tests or assertions you might add to
> try and detect problems like that in the future?  I haven't been following
> this closely enough to have any suggestions where, and there is a lot of
> error checking aimed at logging already; maybe there's nothing new to chase
> there.

The last bug was caused by wrong detoasting of function parameters, and
it tripped an assertion, so that case was caught.

I can recheck the code to see where I can add asserts.

There is a known issue - I cannot test a multi-function CHECK statement
in the regression tests, because the results (notices, errors) can come
in random order. We don't sort functions by OID in check_functions_lookup.


>
> -Can we find some larger functions you haven't tested this against yet to
> throw at it?  It seems able to consume all the cases you've constructed for
> it; it would be nice to find some brand new ones it's never seen before to
> check.
>

I use it for checking my largest plpgsql project - about 300KB of
plpgsql procedures - but this code is not free - and this module helped
to find a lot of bugs.

I plan to check more functions from the regression tests - but those
tests contain no bugs - I checked almost all of them four months ago.
Functions based on dynamic SQL or temp tables cannot be checked.

> This has made a lot of progress and seems it will be a good commit candidate
> for the next CF.  I think it just needs a bit more time than we have left in this
> CommitFest for it right now, particularly given the size of the patch.  I'm
> turning this one into "returned with feedback", but as a mediocre pl/pgsql
> author I'm hoping to see more updates still.

I'll send an update soon

Regards

Pavel

>
> --
> Greg Smith   2ndQuadrant US    g...@2ndquadrant.com   Baltimore, MD
> PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] review: CHECK FUNCTION statement

2011-12-17 Thread Pavel Stehule
Hello

>
> You have the option "fatal_errors" for the checker function, but you
> special case it in CheckFunction(CheckFunctionStmt *stmt) and turn
> errors to warnings if it is not set.
>
> Wouldn't it be better to have the checker function ereport a WARNING
> or an ERROR depending on the setting? Options should be handled by the
> checker function.
>

I would prefer to process fatal_errors outside the checker function -
it is just more robust. This flag doesn't make much sense in plpgsql -
but it can make more sense in other languages.

But I'll think again about the flags.

A note about warnings and errors: warnings are useless at the checker
function level, because they are just shown, but they cannot be
trapped.

Maybe a result based on a tuplestore would be better - I have to look
into it.

Regards

Pavel


> Yours,
> Laurenz Albe

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Page Checksums

2011-12-17 Thread David Fetter
Folks,

What:

Please find attached a patch for 9.2-to-be which implements page
checksums.  It changes the page format, so it's an initdb-forcing
change.

How: 
In order to ensure that the checksum actually matches the hint
bits, this makes a copy of the page, calculates the checksum, then
sends the checksum and copy to the kernel, which handles sending
it the rest of the way to persistent storage.
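
In outline, the write path does something like the following. This is a
standalone sketch of the idea only, not code from the patch, and the
FNV-1a hash here is just a stand-in for whatever checksum function the
patch actually uses:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLCKSZ 8192

/* stand-in checksum (FNV-1a); not necessarily what the patch uses */
static uint32_t
checksum_block(const unsigned char *buf, size_t len)
{
	uint32_t	hash = 2166136261u;
	size_t		i;

	for (i = 0; i < len; i++)
		hash = (hash ^ buf[i]) * 16777619u;
	return hash;
}

int
main(void)
{
	unsigned char	page[BLCKSZ] = {0};	/* stands in for a shared buffer */
	unsigned char	copy[BLCKSZ];

	/*
	 * Snapshot the buffer first: hint bits may be set on the live page
	 * even while it is being written out, so the checksum must be
	 * computed over a stable copy, and that copy (with the checksum
	 * stored in its header) is what gets handed to the kernel.
	 */
	memcpy(copy, page, BLCKSZ);
	printf("checksum = %08x\n", (unsigned) checksum_block(copy, BLCKSZ));
	return 0;
}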

Why:
My employer, VMware, thinks it's a good thing, and has dedicated
engineering resources to it.  Lots of people's data is already in
cosmic ray territory, and many others' data will be soon.  And
it's a TODO :)

If this introduces new failure modes, please detail, and preferably
demonstrate, just what those new modes are.  As far as we've been able
to determine so far, it could expose on-disk corruption that wasn't
exposed before, but we see this as dealing with a previously
un-dealt-with failure rather than causing one.

Questions, comments and bug fixes are, of course, welcome.

Let the flames begin!

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0cc3296..a5c20b3 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1685,6 +1685,7 @@ SET ENABLE_SEQSCAN TO OFF;
 data corruption, after a system failure. The risks are similar to
 turning off fsync, though smaller, and it should be turned off
 only based on the same circumstances recommended for that parameter.
+This parameter must be on when page_cksum is on.

 

@@ -1701,6 +1702,20 @@ SET ENABLE_SEQSCAN TO OFF;
   
  
 
+ 
+  page_cksum (boolean)
+  
+   page_cksum configuration parameter
+   
+   
+
+ When this parameter is on, the
+ PostgreSQL server writes and
+ checks checksums for each page written to persistent storage.
+
+   
+ 
+
  
   wal_buffers (integer)
   
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index 963189d..0fa5f68 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -314,6 +314,9 @@ extern char *optarg;
 extern int optind,
opterr;
 
+extern int page_checksum;
+extern bool fullPageWrites;
+
 #ifdef HAVE_INT_OPTRESET
extern int optreset;   /* might not be declared by system headers */
 #endif
@@ -766,6 +769,29 @@ PostmasterMain(int argc, char *argv[])
(errmsg("WAL streaming (max_wal_senders > 0) 
requires wal_level \"archive\" or \"hot_standby\"")));
 
/*
+* The idea here is that there will be checksum mismatches if there
+* are partial writes to pages during hardware crashes.  The user
+* should have full_page_writes enabled if page_checksum is
+* enabled so that these pages are automatically fixed, otherwise
+* PostgreSQL may get checksum errors after crashes on pages that
+* are in fact partially written and hence corrupted.  With
+* full_page_writes enabled, PostgreSQL will replace each page
+* without ever looking at the partially-written page and seeing
+* an incorrect checksum.  Hence, checksums will detect only real
+* disk corruptions, i.e. places where the disk reported a
+* successful write but the data was still corrupted at some
+* point.
+*
+* Alternatively, we may want to leave this check out.  This would
+* be for sophisticated users who have some other guarantee
+* (hardware and/or software) against ever producing a partial
+* write during crashes.
+*/
+   if (page_checksum && !fullPageWrites)
+   ereport(ERROR,
+   (errmsg("full_page_writes must be enabled if page_checksum is enabled.")));
+
+   /*
 * Other one-time internal sanity checks can go here, if they are fast.
 * (Put any slow processing further down, after postmaster.pid 
creation.)
 */
diff --git a/src/backend/storage/smgr/smgr.c b/src/backend/storage/smgr/smgr.c
index 4f607cd..1756d62 100644
--- a/src/backend/storage/smgr/smgr.c
+++ b/src/backend/storage/smgr/smgr.c
@@ -17,6 +17,7 @@
  */
 #include "postgres.h"
 
+#include "catalog/catalog.h"
 #include "commands/tablespace.h"
 #include "storage/bufmgr.h"
 #include "storage/ipc.h"
@@ -79,6 +80,12 @@ static const int NSmgr = lengthof(smgrsw);
  */
 static HTAB *SMgrRelationHash = NULL;
 
+/* Page checksumming. */
+static uint64 tempbuf[BLCKSZ/sizeof(uint64)];
+extern bool page_checksum;
+
+#d

Re: [HACKERS] JSON for PG 9.2

2011-12-17 Thread Dimitri Fontaine
Hi,

Peter Eisentraut  writes:
> The way forward here is to maintain this as an extension, provide debs
> and rpms, and show that that is maintainable.  I can see numerous
> advantages in maintaining a PL outside the core; especially if you are
> still starting up and want to iterate quickly.

I'd like to add some confusion on the implementation choice, because it
looks damn too easy now… Guile 2.0 offers an implementation of the
ECMAscript language and plscheme already exists as a PostgreSQL PL
extension for integrating with Guile.

  http://plscheme.projects.postgresql.org/

  http://wingolog.org/archives/2009/02/22/ecmascript-for-guile
  http://packages.debian.org/sid/guile-2.0

  http://www.gnu.org/software/guile/
  
  Guile is an extension language platform

  Guile is an efficient virtual machine that executes a portable
  instruction set generated by its optimizing compiler, and integrates
  very easily with C and C++ application code. In addition to Scheme,
  Guile includes compiler front-ends for ECMAScript and Emacs Lisp
  (support for Lua is underway), which means your application can be
  extended in the language (or languages) most appropriate for your user
  base. And Guile's tools for parsing and compiling are exposed as part
  of its standard module set, so support for additional languages can be
  added without writing a single line of C.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: SP-GiST, Space-Partitioned GiST

2011-12-17 Thread Tom Lane
Teodor Sigaev  writes:
> [ spgist patch ]

I've applied this after a lot of revisions, some cosmetic (better
comments etc), some not so much (less bogus vacuum and WAL handling).

There are still a number of loose ends that need to be worked on:

* The question of whether to store nulls so SPGiST can support
full-index scans.  I'm not convinced this is worth doing, because a
full-index scan is likely to suck performance-wise in SPGiST anyway.
But if it's going to be done then IMO it needs to happen for 9.2,
because it will affect both on-disk data and the opclass API.

* Supporting index-only scans.  This I think is worth doing, and I
plan to go do it in the next few days.  However, this is going to
complicate any desire to support full-index scans, because right now
value reconstruction is done by inner_consistent and leaf_consistent,
so there's no way to do it unless there's a qual to check.

* Perhaps it'd be a good idea to move the loop over scankeys to inside
the opclass consistent methods, ie call them just once to check all the
scankeys.  Then we could meaningfully define zero scankeys as a full
index scan, and we would also get rid of redundant value reconstruction
work when there's more than one scankey.  (Though I'm not sure
multiple-scankey cases will occur often enough to make that really worth
worrying about.)

* SPGiST inserts XIDs into redirect tuples so that it can tell when it's
safe for VACUUM to delete them.  This is similar to something btree
does, and as I recall, that was a headache for hot standby.  So probably
SPGiST also needs some work to be safe for hot standby.

* The index free space management seems pretty questionable.  I can
understand the use of the small local cache of recently-used pages in
each backend, but I'm much less than convinced that it's a good idea
to share that across backends through the metapage.  It's too small
for that.  I think the code needs to exploit the FSM a lot more.  For
one thing, AFAICS only *completely empty* pages will ever get added to
FSM by VACUUM, which means that once the backend that initialized a
given page has dropped it from its (tiny) local cache, it's basically
impossible for that page to ever receive any additions except as splits
from its existing entries.  Perhaps that's intentional but it seems
likely to lead to poor space utilization.

* It bothers me a bit that the config function gets called on every
single operation.  Maybe the rd_amcache struct should be used to cache
the config function results (as well as the datatype lookup results).
On the other hand, I have no proof that that's a noticeable time sink.

* It occurs to me that it might be worth sorting the ScanStackEntry
list by block number after each inner-tuple visit, so as to improve
locality of access.  I don't have any evidence for that either,
though.

* I pulled out the spgstat() function because it seemed like something
that didn't belong in core.  If we're going to have it at all, it should
be in contrib/pgstattuple.  I'm willing to do the work to put it there,
but I wonder if anyone has any thoughts about whether to keep its
existing API (return a text value) or change it to look more like
pgstatindex(), ie return a set of columns.  I'd favor the latter if
I thought the set of columns was well-defined, but I'm not sure it is.
(As a reminder, I've attached the last state of the function before
I pulled it out.)

regards, tom lane



Datum
spgstat(PG_FUNCTION_ARGS)
{
text   *name = PG_GETARG_TEXT_P(0);
RangeVar   *relvar;
Relationindex;
BlockNumber blkno;
BlockNumber totalPages = 0,
innerPages = 0,
leafPages = 0,
emptyPages = 0,
deletedPages = 0;
double  usedSpace = 0.0;
charres[1024];
int bufferSize = -1;
int64   innerTuples = 0,
leafTuples = 0,
nAllTheSame = 0,
nLeafPlaceholder = 0,
nInnerPlaceholder = 0,
nLeafRedirect = 0,
nInnerRedirect = 0;

#define IS_INDEX(r) ((r)->rd_rel->relkind == RELKIND_INDEX)
#define IS_SPGIST(r) ((r)->rd_rel->relam == SPGIST_AM_OID)

relvar = makeRangeVarFromNameList(textToQualifiedNameList(name));
index = relation_openrv(relvar, AccessExclusiveLock);

if (!IS_INDEX(index) || !IS_SPGIST(index))
elog(ERROR, "relation \"%s\" is not an SPGiST index",
 RelationGetRelationName(index));

totalPages = RelationGetNumberOfBlocks(index);

for (blkno = SPGIST_HEAD_BLKNO; blkno < totalPages; blkno++)
{
Buffer  buffer;
Pagepage;
int pageFree;

buffer = ReadBuffer(index, blkno);
LockBuffer(buffer, BUFFER_LOCK_SHARE);

page = BufferGetPage(buffer);

if (PageIsNew(page) || SpGistPageIsDeleted(page))
{
deletedPages++;
UnlockReleaseBuffer(buffer);
   

Re: [HACKERS] JSON for PG 9.2

2011-12-17 Thread Robert Haas
On Sat, Dec 17, 2011 at 5:02 PM, Dimitri Fontaine
 wrote:
> I'd like to add some confusion on the implementation choice, because it
> looks damn too easy now… Guile 2.0 offers an implementation of the
> ECMAscript language and plscheme already exists as a PostgreSQL PL
> extension for integrating with Guile.

It seems like the licensing there could potentially be problematic.
It's GPL with a linking exception.  Not sure we want to go there.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON for PG 9.2

2011-12-17 Thread David E. Wheeler
On Dec 17, 2011, at 3:53 AM, Simon Riggs wrote:

> Which looks very good.

Love having the start here. I forwarded this message to Claes Jakobsson, 
creator of the jansson-using pg-json extension. He’s a bit less supportive. He 
gave me permission to quote him here:

> Frankly I see the inclusion of a JSON datatype in core as unnecessary. Stuff 
> should be moved out of core rather than in, as we do in Perl. Also, does this 
> patch mean that the 'json' type is forever claimed and can't be replaced by 
> extensions?
> 
> There's little reason to reimplement JSON parsing, comparision and other 
> routines when there's a multitude of already good libraries.

Personally, I think that there really should be a core key/value-type data 
type, and json is probably the best possible choice, absent a 
SQL-standard-mandated type (which would probably suck anyway). But I think it 
worthwhile to hear alternate points of view, and this isn't far from what Jan 
said last week.

Best,

David
-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON for PG 9.2

2011-12-17 Thread Robert Haas
On Sat, Dec 17, 2011 at 7:50 PM, David E. Wheeler  wrote:
> Love having the start here. I forwarded this message to Claes Jakobsson, 
> creator of the jansson-using pg-json extension. He’s a bit less supportive. 
> He gave me permission to quote him here:
>
>> Frankly I see the inclusion of a JSON datatype in core as unnecessary. Stuff 
>> should be moved out of core rather than in, as we do in Perl. Also, does 
>> this patch mean that the 'json' type is forever claimed and can't be 
>> replaced by extensions?
>>
>> There's little reason to reimplement JSON parsing, comparision and other 
>> routines when there's a multitude of already good libraries.

That's fair enough, but we've had *many* requests for this
functionality in core, and I don't see what we lose by having at least
some basic functionality built in.  There is always room for people to
provide extensions and add-ons that build on whatever core support we
provide.  There must be an order of magnitude more demand for this
data type than there is for any other potential new in-core data type.
 Off the top of my head, I can't think of anything else that's even in
the same league; can you?

As for whether to use somebody else's implementation or roll our own,
I'm not convinced there's any value in reusing somebody else's
implementation.  Consider a library like json-c, just the first thing
I happened to download.  The license is compatible, so that's good.
But the coding style is completely different from ours, the memory
management is not compatible with ours (it uses calloc and it's not
pluggable), the error messages don't follow our style guidelines and
won't work with our translation infrastructure, it has its own
printfbuf which is redundant with our StringInfoData, it has its own
hash table implementation which is also redundant with code we already
have, and of course it won't contain anything like
CHECK_FOR_INTERRUPTS() any place that might be needed.   Now, sure,
all of those problems are fixable.  But by the time you get done it's
not any less work than writing your own, and probably not as well
adapted to our particular needs.

The jansson library has a pluggable memory allocator, but most of the
other complaints above still apply; and I see, for example, that it
contains its own UTF-8 validator and locale conversion routines, which
is undoubtedly not what we want.  jansson also interprets numeric
values as either a native integer or floating point values, which
limits the amount of precision available and means that values may be
output in a manner quite different from how they were put in.
Although this is probably legal, since RFC4627 states that JSON
parsers may set limits on the range of numbers, I think it's an
unnecessary and undesirable limitation.  I have always enjoyed the
ability to -- for example -- SELECT 400! in PostgreSQL and get an
exact answer, and I think it would be nice if I could store the result
in a JSON object.  I am also quite certain that someone will propose
(or, perhaps, actually write) a function to convert a record to a JSON
object, and I think it would be mighty nice if numeric, int4, and int8
could be transformed into JSON numbers rather than JSON strings, which
isn't going to work - at least for numeric - if there are significant
range or precision limitations.

For XML, it makes a lot of sense for us to integrate with an external
library.  Consider libxml2, the library we actually do integrate with.
 The root directory has over 275,000 lines of code in .c and .h files.
 Obviously, it's worth suffering through some pain (and we definitely
have suffered through some pain) to avoid having to rewrite some
substantial portion of that code.  Providing some basic JSON support
figures to require about two orders of magnitude less code, which IMHO
makes the calculation completely different.  The reason there are so
many JSON libraries out there is because it only takes a day to write
one.  If you look at a couple of JSON parsers written by other people
and don't find exactly what you're looking for, you just go write one
of your own.  You then have the option to hang out a shingle and
critique the next three people who come along and do the same thing,
but is that really justified?  Odds are good that their reasons for
rolling their own were just as good as yours.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON for PG 9.2

2011-12-17 Thread Tom Lane
Robert Haas  writes:
> On Sat, Dec 17, 2011 at 7:50 PM, David E. Wheeler  
> wrote:
>> Love having the start here. I forwarded this message to Claes Jakobsson, 
>> creator of the jansson-using pg-json extension. He’s a bit less supportive. 
>> He gave me permission to quote him here:

>>> Frankly I see the inclusion of a JSON datatype in core as unnecessary. 
>>> Stuff should be moved out of core rather than in, as we do in Perl. Also, 
>>> does this patch mean that the 'json' type is forever claimed and can't be 
>>> replaced by extensions?
>>> There's little reason to reimplement JSON parsing, comparision and other 
>>> routines when there's a multitude of already good libraries.

> That's fair enough, but we've had *many* requests for this
> functionality in core, I don't see what we lose by having at least
> some basic functionality built in.  There is always room for people to
> provide extensions and add-ons that build on whatever core support we
> provide.

Well, I think that that's exactly the question here: if we do something
in core, will it foreclose options for people who want to do add-ons?

The main thing that's troubling me there is the issue of plain text
representation versus something precompiled (and if the latter, what
exactly).  I don't see how you "build on" a core datatype that makes a
decision different from what you wanted for that.

I'm also +1 to Claes' opinion that this can be perfectly well managed as
an add-on.  We've sweated blood over many years to make PG extensions
work nicely, and they now work better than they ever have.  It's not
clear to me why the push to make something core when it can obviously be
an extension.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON for PG 9.2

2011-12-17 Thread David E. Wheeler
On Dec 17, 2011, at 7:40 PM, Tom Lane wrote:

> Well, I think that that's exactly the question here: if we do something
> in core, will it foreclose options for people who want to do add-ons?

Why would it? They would just have to use a different name.

Best,

David


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON for PG 9.2

2011-12-17 Thread Robert Haas
On Sat, Dec 17, 2011 at 10:42 PM, David E. Wheeler  wrote:
> On Dec 17, 2011, at 7:40 PM, Tom Lane wrote:
>> Well, I think that that's exactly the question here: if we do something
>> in core, will it foreclose options for people who want to do add-ons?
>
> Why would it? They would just have to use a different name.

Yeah, exactly.  Or for that matter, the same name in a different
schema.  And as for the question of text vs. binary, that's going to
be two separate data types whether it gets done in core or elsewhere.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Allow substitute allocators for PGresult.

2011-12-17 Thread Tom Lane
Greg Smith  writes:
> On 12/01/2011 05:48 AM, Kyotaro HORIGUCHI wrote:
>>                                 xfer time   Peak RSS
>> Original                      :     6.02s      850MB
>> libpq patch + Original dblink :     6.11s      850MB
>> full patch                    :     4.44s      643MB

> These look like interesting results.  Currently Tom is listed as the 
> reviewer on this patch, based on comments made before the CF really 
> started.  And the patch has incorrectly been sitting in "Waiting 
> for author" for the last week; oops.  I'm not sure what to do with this 
> one now except raise a general call to see if anyone wants to take a 
> look at it, now that it seems to be in good enough shape to deliver 
> measurable results.

I did list myself as reviewer some time ago, but if anyone else wants to
take it I won't be offended ;-)

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to allow users to kill their own queries

2011-12-17 Thread Tom Lane
Robert Haas  writes:
> On Fri, Dec 16, 2011 at 1:21 AM, Greg Smith  wrote:
>> ... If you assume someone can run through all the
>> PIDs between those checks and the kill, the system is already broken that
>> way.

> From a theoretical point of view, I believe it to be slightly
> different.  If a superuser sends a kill, they will certainly be
> authorized to kill whatever they end up killing, because they are
> authorized to kill anything.  On the other hand, the proposed patch
> would potentially result - in the extremely unlikely event of a
> super-fast PID wraparound - in someone cancelling a query they
> otherwise wouldn't have been able to cancel.

> In practice, the chances of this seem fairly remote.

I think this argument is bogus: if this is a real issue, then no use of
kill() anytime, by anyone, is safe.  In practice I believe that Unix
systems avoid recycling PIDs right away so as to offer some protection.
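
For reference, the pattern in question is the classic check-then-act
window around kill(); a toy illustration (the permission check is a
made-up stub, and the demo signals itself so it's harmless):

#include <signal.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* stand-in for the backend's "same user?" check */
static bool
caller_may_cancel(pid_t pid)
{
	(void) pid;
	return true;
}

int
main(void)
{
	pid_t		target = getpid();	/* signal ourselves for the demo */

	if (caller_may_cancel(target))
	{
		/*
		 * The theoretical race: nothing stops the target from exiting
		 * and its PID being recycled between the check above and the
		 * kill() below.  Signal 0 only probes for existence.
		 */
		if (kill(target, 0) == 0)
			printf("pid %d exists; would send SIGINT here\n", (int) target);
	}
	return 0;
}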

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Command Triggers

2011-12-17 Thread Bruce Momjian
Dimitri Fontaine wrote:
> Here again, trying to generalize before we have anything useful is a
> recipe for failure. I concur that "Process Utility Top-Level Only
> Command Triggers" is a pretty limited feature in scope, yet that's what
> I want to obtain here, and I think it's useful enough on its own.
> 
> If you disagree, please propose a user level scheme where we can fit the
> work I'm doing so that I can adapt my patch and implement a part of your
> scheme in a future proof way.  I'm ready to do that even when I have no
> need for what you're talking about, if that's what it takes.

We have a big user community and what _you_ want for this feature is
only a small part of our decision on what is needed.  Robert's concern
that this might not be useful enough for the general use-cases people
want is a legitimate, if difficult to hear, analysis.

The resistance we get when removing features is sobering.  Imagine the
worst case, e.g. xml2, where we rightly reimplement it later, but there is
something that is better done with the old interface, and we end up
having to support both.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers