Re: [HACKERS] SSI patch version 14

2011-02-08 Thread Kevin Grittner
Heikki Linnakangas  wrote:
 
 Ok, committed.
 
I see that at least three BuildFarm critters don't have UINT64_MAX
defined.  Not sure why coypu is running out of connections.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] exposing COPY API

2011-02-08 Thread Itagaki Takahiro
On Tue, Feb 8, 2011 at 09:38, Andrew Dunstan and...@dunslane.net wrote:
 Here is a patch against the latest revision of file_fdw to exercise this
 API. It includes some regression tests, and I think apart from one or two
 small details plus a requirement for documentation, is complete.

The patch contains a few fixes for typos in the original patch.
Hanada-san, could you take them into the core file_fdw patch?

   CREATE FOREIGN TABLE jagged_text (
        t   text[]
   ) SERVER file_server
   OPTIONS (format 'csv', filename
   '/home/andrew/pgl/pg_head/contrib/file_fdw/data/jagged.csv', header
   'true', textarray 'true');

There might be another approach -- we could have a separate jagged_file_fdw
alongside file_fdw, because textarray mode cannot support some features,
such as the force_not_null option and multiple non-text[] columns.

I'll include NextCopyFromRawFields() in the COPY API patch to complete
raw_fields support in CopyState. (Otherwise, we should revert the changes
related to raw_fields.)  However, we'd better postpone jagged CSV
support to 9.2; the design is still under discussion.
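For reference, a minimal sketch of how a caller such as file_fdw's textarray
mode might drive the proposed function is below; the exact signature and the
convention that a field pointer may be NULL for a SQL NULL are my assumptions
about the API under discussion, not settled parts of the patch.

#include "postgres.h"
#include "commands/copy.h"

/*
 * Sketch only: loop over the raw per-row fields exposed by the proposed
 *     bool NextCopyFromRawFields(CopyState cstate, char ***fields, int *nfields);
 * The NULL-field convention here is an assumption, not taken from the patch.
 */
static void
scan_raw_fields(CopyState cstate)
{
	char	  **fields;
	int			nfields;
	int			i;

	while (NextCopyFromRawFields(cstate, &fields, &nfields))
	{
		/* nfields can vary from row to row for a "jagged" CSV file */
		for (i = 0; i < nfields; i++)
			elog(DEBUG1, "field %d: %s", i,
				 fields[i] ? fields[i] : "<null>");
	}
}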

-- 
Itagaki Takahiro

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] PostgreSQL FDW update

2011-02-08 Thread Heikki Linnakangas
I needed something to test the FDW API patch with, and didn't want to 
get involved in the COPY API changes, and also wanted to have something 
that needs real connection management and can push down quals. So I 
updated the postgresql_fdw patch to work with the latest FDW patch.


Here it is. It's a bit of a mess, but it works for simple queries.

It requires a small change to the FDW API
(fdw-api-add-serverid-userid.patch). I added server oid and user oid
fields to the FdwPlan - that seems like basic information that most
FDWs will need, so it seems awkward to require the FDW to wrap them in
Const nodes and a List.
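To make that concrete, here is a rough sketch of the shape of FdwPlan I have
in mind; the exact field names and layout are in
fdw-api-add-serverid-userid.patch, so treat this as an illustration rather
than the patch itself.

#include "postgres.h"
#include "nodes/pg_list.h"

/*
 * Hedged sketch only; field names are assumptions.  The point is that the
 * planner-built FdwPlan carries the foreign server and user OIDs itself,
 * instead of the FDW smuggling them through Const nodes in fdw_private.
 */
typedef struct FdwPlan
{
	NodeTag		type;
	Oid			serverid;		/* foreign server (assumed field name) */
	Oid			userid;			/* user, for user-mapping lookup (assumed) */
	double		startup_cost;	/* cost before fetching the first tuple */
	double		total_cost;		/* total scan cost */
	List	   *fdw_private;	/* FDW-private data, must be copyObject-safe */
} FdwPlan;

With those two OIDs available, a wrapper's BeginScan routine can call
GetForeignServer(plan->serverid) and GetUserMapping(plan->userid,
plan->serverid) directly whenever it needs to (re)connect.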


These are also available in my git repository at 
git://git.postgresql.org/git/users/heikki/postgres.git, branches fdw2 
and postgresql_fdw.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
diff --git a/contrib/postgresql_fdw/postgresql_fdw.c b/contrib/postgresql_fdw/postgresql_fdw.c
index cd7902d..17e3856 100644
--- a/contrib/postgresql_fdw/postgresql_fdw.c
+++ b/contrib/postgresql_fdw/postgresql_fdw.c
@@ -3,17 +3,19 @@
  * postgresql_fdw.c
  *		  foreign-data wrapper for PostgreSQL
  *
- * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2011, PostgreSQL Global Development Group
  *
  * IDENTIFICATION
- *		  $PostgreSQL$
+ *		  contrib/postgresql_fdw/postgresql_fdw.c
  *
  *-
  */
 #include "postgres.h"
 
+#include "catalog/pg_namespace.h"
 #include "catalog/pg_operator.h"
 #include "catalog/pg_proc.h"
+#include "foreign/foreign.h"
 #include "funcapi.h"
 #include "libpq-fe.h"
 #include "mb/pg_wchar.h"
@@ -40,25 +42,21 @@ extern Datum postgresql_fdw_handler(PG_FUNCTION_ARGS);
 /*
  * FDW routines
  */
-static FSConnection* pgConnectServer(ForeignServer *server, UserMapping *user);
-static void pgFreeFSConnection(FSConnection *conn);
-static void pgEstimateCosts(ForeignPath *path, PlannerInfo *root, RelOptInfo *baserel);
-static void pgOpen(ForeignScanState *scanstate);
-static void pgBeginScan(ForeignScanState *scanstate);
-static void pgIterate(ForeignScanState *scanstate);
-static void pgClose(ForeignScanState *scanstate);
-static void pgReOpen(ForeignScanState *scanstate);
+static FdwPlan *pgPlanRelScan(Oid foreigntableid, PlannerInfo *root,
+			  RelOptInfo *baserel);
+static FdwExecutionState *pgBeginScan(FdwPlan *plan, ParamListInfo params);
+static void pgIterate(FdwExecutionState *state, TupleTableSlot *slot);
+static void pgReScan(FdwExecutionState *state);
+static void pgEndScan(FdwExecutionState *state);
 
 /* helper for deparsing a request into SQL statement */
-static bool is_foreign_qual(Expr *expr);
+
+static bool is_foreign_qual(PlannerInfo *root, RelOptInfo *baserel, Expr *expr);
 static bool foreign_qual_walker(Node *node, void *context);
 static void deparseSelectClause(StringInfo sql, ForeignTable *table, TupleDesc tupdesc, const char *aliasname, bool prefix);
-static void deparseFromClause(StringInfo sql, ForeignTable *table, const char *aliasname, bool prefix);
-static char *deparseSql(ForeignScanState *scanstate);
+static char *deparseSql(Oid foreigntableid, PlannerInfo *root, RelOptInfo *baserel);
 
-/* helper for handling result tuples */
-static void storeResult(Tuplestorestate *tupstore, bool is_sql_cmd,
-		TupleDesc tupdesc, PGresult *res);
+static void storeResult(TupleTableSlot *slot, PGresult *res, int rowno);
 
 /*
  * Connection management
@@ -75,25 +73,23 @@ static void cleanup_connection(ResourceReleasePhase phase,
 /*
  * PostgreSQL specific portion of a foreign query request
  */
-typedef struct pgFdwReply
+typedef struct pgFdwExecutionState
 {
-	char			   *sql;		/* SQL text to be sent to foreign server */
-	Tuplestorestate	   *tupstore;	/* store all of result tuples */
-} pgFdwReply;
+	PGconn *conn;
+	PGresult *res;
+	int		nextrow;
+} pgFdwExecutionState;
 
 /*
  * FdwRoutine of PostgreSQL wrapper
  */
 static FdwRoutine postgresql_fdw_routine =
 {
-	pgConnectServer,
-	pgFreeFSConnection,
-	pgEstimateCosts,
-	pgOpen,
+	pgPlanRelScan,
 	pgBeginScan,
 	pgIterate,
-	pgClose,
-	pgReOpen,
+	pgReScan,
+	pgEndScan
 };
 
 /*
@@ -107,58 +103,18 @@ postgresql_fdw_handler(PG_FUNCTION_ARGS)
 	PG_RETURN_POINTER(postgresql_fdw_routine);
 }
 
-/*
- * Connect to a foreign PostgreSQL server with libpq if necessary.
- */
-static FSConnection *
-pgConnectServer(ForeignServer *server, UserMapping *user)
+static FdwPlan *
+pgPlanRelScan(Oid foreigntableid, PlannerInfo *root, RelOptInfo *baserel)
 {
-	elog(DEBUG3, "%s() called for \"%s\"", __FUNCTION__, server->servername);
-
-	return (FSConnection *) GetConnection(server, user);
-}
+	FdwPlan *plan = makeNode(FdwPlan);
+	char *sql;
 
+	sql = deparseSql(foreigntableid, root, baserel);
 
-/*
- * Disconnect from the foreign server if the connection is not referenced by
- * any other scan.
- */
-static void
-pgFreeFSConnection(FSConnection *conn)
-{
-	elog(DEBUG3, "%s() called", __FUNCTION__);
-
-	if 

Re: [HACKERS] [GENERAL] Issues with generate_series using integer boundaries

2011-02-08 Thread Itagaki Takahiro
On Mon, Feb 7, 2011 at 20:38, Thom Brown t...@linux.com wrote:
 Yes, of course, int8 functions are separate.  I attach an updated
 patch, although I still think there's a better way of doing this.

Thanks. Please add the patch to the *current* commitfest
because it's a bugfix.
https://commitfest.postgresql.org/action/commitfest_view?id=9

I've not tested the patch yet, but if we could drop the following
line in the patch, the code could be much cleaner.
  /* ensure first value in series should exist */

 I'm not sure how this should be handled.  Should there just be a check
 for either kind of infinity and return an error if that's the case?  I

Maybe so. It would also work if we had infinity on timestamp overflow, but
I've not tested that yet.  Anyway, we need a similar fix for the timestamp versions.
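To illustrate the kind of overflow involved, here is a sketch (not the
submitted patch) of an overflow-safe step computation; the helper name and
the idea of widening to 64 bits for the test are mine.

#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch: generate_series(start, stop) with stop near INT32_MAX must not
 * compute "current + step" in 32-bit arithmetic, or the value wraps and the
 * series never terminates.  Doing the test in 64 bits is one way out.
 */
static bool
series_next(int32_t current, int32_t step, int32_t stop, int32_t *next)
{
	int64_t		candidate = (int64_t) current + (int64_t) step;

	/* Past the stop bound (in either direction): the series is done. */
	if (step > 0 ? candidate > (int64_t) stop : candidate < (int64_t) stop)
		return false;

	*next = (int32_t) candidate;	/* safe: candidate lies within the bounds */
	return true;
}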

-- 
Itagaki Takahiro

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More extension issues: ownership and search_path

2011-02-08 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
  I see no
 argument whatsoever why we should lock down extensions and only extensions
 against this risk.

Spelled this way I can only agree :)

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-08 Thread Heikki Linnakangas

On 08.02.2011 10:43, Kevin Grittner wrote:

Heikki Linnakangas  wrote:


Ok, committed.


I see that at least three BuildFarm critters don't have UINT64_MAX
defined.


I guess we'll have to just #define it ourselves. Or could we just pick 
another magic value? Do we actually rely on InvalidSerCommitSeqNo being 
higher than all other values anywhere?
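For illustration, the fallback could be as small as the sketch below; this is
an assumption on my part, not necessarily what would get committed.

/*
 * Sketch of a possible fallback, assuming the platform provides <stdint.h>
 * but omits this particular constant.
 */
#include <stdint.h>

#ifndef UINT64_MAX
#define UINT64_MAX ((uint64_t) 0xFFFFFFFFFFFFFFFFULL)
#endif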



 Not sure why coypu is running out of connections.


Hmm, it seems to choose a smaller max_connections value now, 20 instead 
of 30. Looks like our shared memory usage went up by just enough to pass 
that threshold.


Looks like our shared memory footprint grew by about 2MB with default 
configuration, from 37MB to 39MB. That's quite significant. Should we 
dial down the default of max_predicate_locks_per_transaction? Or tweak 
the sizing of the hash tables somehow?


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] exposing COPY API

2011-02-08 Thread Andrew Dunstan



On 02/08/2011 03:49 AM, Itagaki Takahiro wrote:

On Tue, Feb 8, 2011 at 09:38, Andrew Dunstanand...@dunslane.net  wrote:

Here is a patch against the latest revision of file_fdw to exercise this
API. It includes some regression tests, and I think apart from one or two
small details plus a requirement for documentation, is complete.

The patch contains a few fixes for typo in the original patch.
Hanada-san, could you take them into the core file_fdw patch?


   CREATE FOREIGN TABLE jagged_text (
t   text[]
   ) SERVER file_server
   OPTIONS (format 'csv', filename
   '/home/andrew/pgl/pg_head/contrib/file_fdw/data/jagged.csv', header
   'true', textarray 'true');

There might be another approach -- we could have jagged_file_fdw aside
from file_fdw, because we cannot support some features in textarray mode
like force_not_null option and multiple non-text[] columns.

I'll include NextCopyFromRawFields() in COPY API patch to complete
raw_fields support in CopyState. (Or, we should also revert changes
related to raw_fields.)  However, we'd better postpone jagged csv
support to 9.2. The design is still under discussion.



Please do include NextCopyFromRawFields(). I think the API is very 
limiting without it, but very flexible with it.


I also think that it's better to have contrib examples of the use of an 
API than not.


FORCE NOT NULL is much more of an issue for the *non* raw fields case 
than the reverse. In the raw fields case the caller can manage it 
themselves.


Multiple non-text[] columns strikes me as a red herring. This isn't the 
only possible use of NextCopyFromRawFields(), but it is one significant 
(and very useful as it stands) use.


cheers

andrew





--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] exposing COPY API

2011-02-08 Thread Shigeru HANADA
On Tue, 8 Feb 2011 17:49:09 +0900
Itagaki Takahiro itagaki.takah...@gmail.com wrote:

 On Tue, Feb 8, 2011 at 09:38, Andrew Dunstan and...@dunslane.net wrote:
  Here is a patch against the latest revision of file_fdw to exercise this
  API. It includes some regression tests, and I think apart from one or two
  small details plus a requirement for documentation, is complete.
 
 The patch contains a few fixes for typo in the original patch.
 Hanada-san, could you take them into the core file_fdw patch?

Thanks, I've applied them to my local repo.

s/Centinel/Sentinel/
s/Vaidate/Validate/
s/reaind/reading/

Recent file_fdw is available here:
http://git.postgresql.org/gitweb?p=users/hanada/postgres.git;a=summary

I'll submit a revised file_fdw patch after removing the IsForeignTable()
catalog lookup, along the lines of Heikki's proposal.

Regards,
--
Shigeru Hanada



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] REVIEW: Determining client_encoding from client locale

2011-02-08 Thread Ibrar Ahmed
Stephen Frost!

I have modified the code to use ADD_STARTUP_OPTION instead of writing the
code again, and I tried the patch on Windows and Linux; it works for me.
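For reference, the helper I'm referring to is of roughly this shape; this is
a sketch from memory rather than a copy of the macro in the patch, so the
details may differ.  It assumes local packet and packet_len variables, as in
the hand-rolled code quoted further down.

/*
 * Sketch: fold the repeated "copy name, copy value, advance packet_len"
 * bookkeeping into one place.  Writing only when a buffer is supplied keeps
 * the usual measure-then-fill pattern working.
 */
#define ADD_STARTUP_OPTION(optname, optval) \
	do { \
		if (packet) \
			strcpy(packet + packet_len, optname); \
		packet_len += strlen(optname) + 1; \
		if (packet) \
			strcpy(packet + packet_len, optval); \
		packet_len += strlen(optval) + 1; \
	} while (0)

/* example usage (hypothetical): */
/* if (conn->client_encoding_initial && conn->client_encoding_initial[0]) */
/*     ADD_STARTUP_OPTION("client_encoding", conn->client_encoding_initial); */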




On Sun, Feb 6, 2011 at 10:19 AM, Stephen Frost sfr...@snowman.net wrote:

 Ibrar,

 * Ibrar Ahmed (ibrar.ah...@gmail.com) wrote:
  I have reviewed/tested this patch.

 Great, thanks for that!

  In my opinion the code should be like this:
 
   if (conn->client_encoding_initial && conn->client_encoding_initial[0])
   {
       if (packet)
       {
           strcpy(packet + packet_len, "client_encoding");
           packet_len += strlen("client_encoding") + 1;
           strcpy(packet + packet_len, conn->client_encoding_initial);
           packet_len += strlen(conn->client_encoding_initial) + 1;
       }
   }

 Makes sense to me, just reading through this email.  Have you tested
 this change..?  Could you provide it as an additional patch or a new
 patch including the change against head, assuming it still works well in
 your testing?

  I will test this patch on Windows and will send results.

 That would be great, it's not easy for me to test under Windows.

Thanks!

Stephen

 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.10 (GNU/Linux)

 iEYEARECAAYFAk1O5joACgkQrzgMPqB3kijODgCeN1/PVKf/qzeuWOz82FwpR/B0
 2rMAnR+4tCxNp9eZn7qIOTXqCv70H2oC
 =vYXv
 -END PGP SIGNATURE-




-- 
   Ibrar Ahmed
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 7131fb4..9223b79 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -259,6 +259,21 @@ PGconn *PQconnectdbParams(const char **keywords, const char **values, int expand
 </listitem>
</varlistentry>
 
+<varlistentry id="libpq-connect-client-encoding" xreflabel="client_encoding">
+ <term><literal>client_encoding</literal></term>
+ <listitem>
+ <para>
+  This sets the <varname>client_encoding</varname>
+  configuration parameter for this connection.  In addition to
+  the values accepted by the corresponding server option, you
+  can use <literal>auto</literal> to determine the right
+  encoding from the current locale in the client
+  (<envar>LC_CTYPE</envar> environment variable on Unix
+  systems).
+ </para>
+ </listitem>
+</varlistentry>

 <varlistentry id="libpq-connect-options" xreflabel="options">
  <term><literal>options</literal></term>
  <listitem>
@@ -6355,6 +6370,16 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough)
   linkend="libpq-connect-connect-timeout"> connection parameter.
  </para>
 </listitem>
 
+<listitem>
+ <para>
+  <indexterm>
+   <primary><envar>PGCLIENTENCODING</envar></primary>
+  </indexterm>
+  <envar>PGCLIENTENCODING</envar> behaves the same as <xref
+  linkend="libpq-connect-client-encoding"> connection parameter.
+ </para>
+</listitem>
 </itemizedlist>
   </para>
 
@@ -6391,17 +6416,6 @@ myEventProc(PGEventId evtId, void *evtInfo, void *passThrough)
 <listitem>
  <para>
   <indexterm>
-   <primary><envar>PGCLIENTENCODING</envar></primary>
-  </indexterm>
-  <envar>PGCLIENTENCODING</envar> sets the default client character
-  set encoding.  (Equivalent to <literal>SET client_encoding TO
-  ...</literal>.)
- </para>
-</listitem>
-
-<listitem>
- <para>
-  <indexterm>
   <primary><envar>PGGEQO</envar></primary>
   </indexterm>
   <envar>PGGEQO</envar> sets the default mode for the genetic query
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index eacae71..0bf05de 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -593,6 +593,17 @@ $ <userinput>psql "service=myservice sslmode=require"</userinput>
 privileges, server is not running on the targeted host, etc.),
 <application>psql</application> will return an error and terminate.
 </para>
+
+<para>
+ If at least one of standard input or standard output are a
+ terminal, then <application>psql</application> sets the client
+ encoding to <quote>auto</quote>, which will detect the
+ appropriate client encoding from the locale settings
+ (<envar>LC_CTYPE</envar> environment variable on Unix systems).
+ If this doesn't work out as expected, the client encoding can be
+ overridden using the environment
+ variable <envar>PGCLIENTENCODING</envar>.
+</para>
   </refsect2>
 
  <refsect2 id="R2-APP-PSQL-4">
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 301dc11..08c979a 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -1478,7 +1478,7 @@ do_connect(char *dbname, char *user, char *host, char *port)
 
 	while (true)
 	{
-#define PARAMS_ARRAY_SIZE	7
+#define PARAMS_ARRAY_SIZE	8
 		const char **keywords = pg_malloc(PARAMS_ARRAY_SIZE * sizeof(*keywords));
 		const char **values = pg_malloc(PARAMS_ARRAY_SIZE * sizeof(*values));
 
@@ 

Re: [HACKERS] [GENERAL] Issues with generate_series using integer boundaries

2011-02-08 Thread Thom Brown
On 8 February 2011 09:22, Itagaki Takahiro itagaki.takah...@gmail.com wrote:
 On Mon, Feb 7, 2011 at 20:38, Thom Brown t...@linux.com wrote:
 Yes, of course, int8 functions are separate.  I attach an updated
 patch, although I still think there's a better way of doing this.

 Thanks. Please add the patch to the *current* commitfest
 because it's a bugfix.
 https://commitfest.postgresql.org/action/commitfest_view?id=9

 I've not tested the patch yet, but if we could drop the following
 line in the patch, the code could be much cleaner.
  /* ensure first value in series should exist */

 I'm not sure how this should be handled.  Should there just be a check
 for either kind of infinity and return an error if that's the case?  I

 Maybe so. It also works if we had infinity on timestamp overflow, but
 I've not tested yet.  Anyway, we need similar fix for timestamp versions.

Well, in its current state, I expect it to get rejected, but I guess
at least it gets a better chance of being looked at.  I've added it to
the commitfest now.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions support for pg_dump, patch v27

2011-02-08 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:

 Attached is an updated patch that incorporates all of the review I've

And that looks great, thanks.  I've only had time to read the patch,
will play with it later on today, hopefully.


I've spotted a comment that I think you missed updating.  The schema
given in the control file is now created in all cases rather than only
when the extension is not relocatable, right?

+   else if (control->schema != NULL)
+   {
+       /*
+        * The extension is not relocatable and the author gave us a schema
+        * for it.  We create the schema here if it does not already exist.
+        */

I also note that the attached version doesn't contain \dX; is that
happenstance or a choice you made here?

 done so far on the core code.  This omits the contrib changes, which
 I've not looked at in any detail, and the docs changes since I've not
 yet updated the docs to match today's code changes.  User-visible
 changes are that WITH NO DATA is gone, and instead there's
 pg_extension_config_dump(regclass, text) as per today's discussion.
 The latter is only lightly tested but seems to work as intended.

I think I will have to test it and get my head around it, but as we
said, it's a good change (simpler, and it only targets user tables).

 I believe the core code is now in pretty good shape; the only open issue
 I have at the moment is that pg_dump will fail to suppress dumping of
 USER MAPPING objects that are in an extension.  Which is something I'm
 not too excited about anyway.  (The reason that's an issue is that I
 reverted most of the changes in pg_dump.c in favor of using pg_dump's
 already existing dependency mechanisms to suppress dumping unwanted
 items.  The USER MAPPING code tries to bypass the DumpableObject
 representation, which means it doesn't get filtered.)

Well, the pg_dump support code is very different from the one I did, so I
will have to learn about the dependency management there before I can
comment.

 The documentation still needs a good bit of work, but I hope to have
 this committed within a day or so.

Great!  Again, thanks a lot, I like the way you simplified and cleaned
the patch, and I still recognise the code :)

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] review: FDW API

2011-02-08 Thread Shigeru HANADA

On Mon, 07 Feb 2011 09:37:37 +0100
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:

 On 07.02.2011 08:00, Shigeru HANADA wrote:
  Sorry for late, attached are revised version of FDW API patches which
  reflect Heikki's comments except removing catalog lookup via
  IsForeignTable().  ISTM that the point is avoiding catalog lookup
  during planning, but I have not found when we can set foreign table
  flag without catalog lookup during RelOptInfo generation.
 
 In get_relation_info(), you do the catalog lookup anyway and have the 
 Relation object at hand. Add a flag to RelOptInfo indicating if it's a 
 foreign table or not, and set that in get_relation_info().

Thanks a lot.

Attached is a revised version of the foreign_scan patch.  This still
requires the fdw_handler patch which was attached to the original post.

avoid_catalog_lookup.patch is attached for review purposes; it contains
the changes for this fix.
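For clarity, the change is along the lines Heikki suggested; the field name
below is my own choice, and the authoritative version is in the attached
avoid_catalog_lookup.patch.

/*
 * Sketch of the approach: get_relation_info() already has the opened
 * Relation at hand, so it can record whether this is a foreign table and
 * spare the planner any later catalog lookup.
 */

/* in include/nodes/relation.h, inside struct RelOptInfo (assumed name): */
bool		is_foreign_table;	/* true if the RTE refers to a foreign table */

/* in optimizer/util/plancat.c, get_relation_info(), once the relation is open: */
rel->is_foreign_table =
	(relation->rd_rel->relkind == RELKIND_FOREIGN_TABLE);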

Regards,
--
Shigeru Hanada


avoid_catalog_lookup.patch
Description: Binary data


20110208-foreign_scan.patch.gz
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postponing some large patches to 9.2

2011-02-08 Thread Stephen Frost
* Robert Haas (robertmh...@gmail.com) wrote:
 - Range Types.  This is a large patch which was submitted for the
 first time to the last CommitFest of the cycle, and the first version
 that had no open TODO items was posted yesterday, three-quarters of
 the way through that last CommitFest.  Some good review has been done.
  While more is probably needed, I think we should feel good about
 what's been accomplished and mark this one Returned with Feedback.

I don't agree w/ punting Range Types.  Range Types were discussed as far
back as the 2010 developer meeting, were discussed quite a bit again
starting in October and throughout the fall, and Jeff has regularly
been posting updates to it.  Given how thorough Jeff is, my feeling is
that this patch is more than ready for beta.  My impression is also that
it's not as invasive or destabilizing as the others and while it wasn't
being posted to the previous commit fests, it was clearly being worked
on, updated, and improved.

 - synchronous replication.  Based on some looking at this today, I am
 somewhat doubtful about the possibility of me or anyone else beating
 this completely into shape in time for 9.2, unless we choose to extend
 the deadline by several weeks.  Simon said that he would have time to
 finish this in the next two weeks, but, as noted, the CommitFest is
 scheduled to be over in ONE week, and it looks to me like this is
 still pretty rough.  However, there's a lot of good stuff in here, and
 I think it might be practical to get some portion of it committed even
 if we can't agree on all of it.  I recommend we give this one a little
 more time to shake out before giving up on it.

It really would be nice to have this, but I agree that it's pretty late
in the game for it to be in the state it appears to be in. :/  It also
seems to have been stalled for the past couple of months, which doesn't
bode well for it, in my view.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] SSI patch version 14

2011-02-08 Thread Bruce Momjian
Kevin Grittner wrote:
 Heikki Linnakangas  wrote:
  
  Ok, committed.
  
 I see that at least three BuildFarm critters don't have UINT64_MAX
 defined.  Not sure why coypu is running out of connections.

Yes, I am seeing those failures here too on my BSD machine.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Named restore points

2011-02-08 Thread Robert Haas
On Tue, Feb 8, 2011 at 2:05 AM, Jaime Casanova ja...@2ndquadrant.com wrote:
 Finally, this is a nice feature iif we have a way to know what named restore
 points are available. DBAs need to take note of this list (that is not good)
 and the lazy ones will have a hard time to recover the right name (possibly
 with a xlog dump tool).

 So how could we store this information? Perhaps a file in
 $PGDATA/pg_xlog/restore_label that contains the label (and possibly the WAL
 location). Also it must have a way to transmit the restore_label when we add
 another restore point. I didn't implement this part (Jaime?) and it seems as
 important as the new xlog record type that is in the patch. It seems
 complicate but I don't have ideas. Anyone? The restore point names could be
 obtained by querying a function (say, pg_restore_point_names or
 pg_restore_point_list).


 i still think this should be a separate tool or a dba written list,

I agree.  Keeping track of where you've set named restore points is
not going to be a problem with a simple solution.  Which restore
points are available is going to depend on which base backup you
restored and what WAL files you stuffed into pg_xlog.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-08 Thread Kevin Grittner
 Heikki Linnakangas  wrote:
 On 08.02.2011 10:43, Kevin Grittner wrote:
  
 I see that at least three BuildFarm critters don't have UINT64_MAX
 defined.
 
 I guess we'll have to just #define it ourselves. Or could we just
 pick another magic value, do we actually rely on
 InvalidSerCommitSeqno being higher than all other values anywhere?
 
It seemed more robust than a low-end number, based on how it's used.
 
 Not sure why coypu is running out of connections.
 
 Hmm, it seems to choose a smaller max_connections value now, 20
 instead of 30. Looks like our shared memory usage went up by just
 enough to pass that threshold.
 
 Looks like our shared memory footprint grew by about 2MB with
 default configuration, from 37MB to 39MB. That's quite significant.
 Should we dial down the default of
 max_predicate_locks_per_transaction? Or tweak the sizing of the
 hash tables somehow?
 
Dialing down max_predicate_locks_per_transaction could cause the user
to see out of shared memory errors sooner, so I'd prefer to stay
away from that.  Personally, I feel that max_connections is higher
than it should be for machines which would have trouble with the RAM
space, but I suspect I'd have trouble selling the notion that the
default for that should be reduced.
 
The multiplier of 10 PredXactList structures per connection is kind
of arbitrary.  It affects the point at which information is pushed to
the lossy summary, so any number from 2 up will work correctly; it's
a matter of performance and false positive rate.  We might want to
put that on a GUC and default it to something lower.
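For concreteness, such a knob might look like the sketch below; the GUC name,
default, and range are only illustrative and are not from any posted patch.

/*
 * Sketch only -- the name, default and bounds are assumptions.  Because the
 * value feeds shared memory sizing, it has to be PGC_POSTMASTER (takes
 * effect only at postmaster restart).
 */
static int	predicate_xacts_per_connection = 10;

/* candidate entry for the ConfigureNamesInt[] table in utils/misc/guc.c */
	{
		{"predicate_xacts_per_connection", PGC_POSTMASTER, LOCK_MANAGEMENT,
			gettext_noop("Sets how many SERIALIZABLEXACT structures are reserved per connection."),
			NULL
		},
		&predicate_xacts_per_connection,
		10, 2, 100,
		NULL, NULL, NULL
	},

The shared memory sizing code in predicate.c would then use this value in
place of the hard-coded multiplier of 10 mentioned above.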
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [pgsql-general 2011-1-21:] Are there any projects interested in object functionality? (+ rule bases)

2011-02-08 Thread Robert Haas
On Tue, Feb 8, 2011 at 2:23 AM, Nick Rudnick joerg.rudn...@t-online.de wrote:
 (my last two posts seemingly did not reach the HACKERS forum, so please let
 me resend the last one ;-) )

They got here - I think just no one had any further comment.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] exposing COPY API

2011-02-08 Thread Robert Haas
On Tue, Feb 8, 2011 at 4:42 AM, Shigeru HANADA
han...@metrosystems.co.jp wrote:
 On Tue, 8 Feb 2011 17:49:09 +0900
 Itagaki Takahiro itagaki.takah...@gmail.com wrote:

 On Tue, Feb 8, 2011 at 09:38, Andrew Dunstan and...@dunslane.net wrote:
  Here is a patch against the latest revision of file_fdw to exercise this
  API. It includes some regression tests, and I think apart from one or two
  small details plus a requirement for documentation, is complete.

 The patch contains a few fixes for typo in the original patch.
 Hanada-san, could you take them into the core file_fdw patch?

 Thanks, I've applied to my local repo.

 s/Centinel/Sentinel/
 s/Vaidate/Validate/
 s/reaind/reading/

 Recent file_fdw is available here:
 http://git.postgresql.org/gitweb?p=users/hanada/postgres.git;a=summary

 I'll submit revised file_fdw patch after removing IsForeignTable()
 catalog lookup along Heikki's proposal.

So I'm a bit confused.  I don't see the actual COPY API change patch
anywhere here.  Are we close to getting something committed there?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Named restore points

2011-02-08 Thread Simon Riggs
On Tue, 2011-02-08 at 08:05 -0500, Robert Haas wrote:
 On Tue, Feb 8, 2011 at 2:05 AM, Jaime Casanova ja...@2ndquadrant.com wrote:
  Finally, this is a nice feature iif we have a way to know what named 
  restore
  points are available. DBAs need to take note of this list (that is not 
  good)
  and the lazy ones will have a hard time to recover the right name (possibly
  with a xlog dump tool).
 
  So how could we store this information? Perhaps a file in
  $PGDATA/pg_xlog/restore_label that contains the label (and possibly the WAL
  location). Also it must have a way to transmit the restore_label when we 
  add
  another restore point. I didn't implement this part (Jaime?) and it seems 
  as
  important as the new xlog record type that is in the patch. It seems
  complicate but I don't have ideas. Anyone? The restore point names could be
  obtained by querying a function (say, pg_restore_point_names or
  pg_restore_point_list).
 
 
  i still think this should be a separate tool or a dba written list,
 
 I agree.  Keeping track of where you've set named restore points is
 not going to be a problem with a simple solution.  Which restore
 points are available is going to depend on which base backup you
 restored and what WAL files you stuffed into pg_xlog.

Yeah agreed. No need for restore_label

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Named restore points

2011-02-08 Thread Simon Riggs
On Fri, 2011-02-04 at 21:15 -0500, Jaime Casanova wrote:
 
  +   else if (recoveryTarget == RECOVERY_TARGET_NAME)
  +   snprintf(buffer, sizeof(buffer),
  +            "%s%u\t%s\t%s named restore point %s\n",
  +            (srcfd < 0) ? "" : "\n",
  +            parentTLI,
  +            xlogfname,
  +            recoveryStopAfter ? "after" : "before",
  +            recoveryStopNamedRestorePoint);
 
  It doesn't matter if it is after or before the restore point.
 After/Before
  only make sense when we're dealing with transaction or time.
 Removed.
 
 
 you're right

Not sure I understand the comment "only make sense when we're dealing
with transaction or time".  Why?

At present, I think the ability to stop before/after a named restore
point should be put back.
 
-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] arrays as pl/perl input arguments [PATCH]

2011-02-08 Thread Andrew Dunstan



On 02/03/2011 01:20 PM, Andrew Dunstan wrote:

- Every existing plperl function that takes arrays is going to get
  slower due to the overhead of parsing the string and allocating the
  array and all its elements.
Well, per my understanding of Alex's changes, the string parsing is not
invoked unless requested by referencing the array in a string context.
Normally, only plperl_ref_from_pg_array will be invoked every time the
function is called with an array argument, which would take little time
to convert the PostgreSQL internal array representation (not a string)
to the perl references, but that's no different from what is already
done with composite type arguments, which are converted to perl hash
references on every corresponding function call.

I'd missed that it was using the internal array representation (obvious
in hindsight) but there's still a significant cost in allocating the SVs
that won't be used by existing code.  Though I agree it's of the same
order as for composite types.



Well, the question seems to be whether or not it's a reasonable price 
to pay. On the whole I'm inclined to think it is, especially when it 
can be avoided by updating your code, which will be a saving in 
fragility and complexity as well.





Tim,

do you still have concerns about this, or are you happy for us to move 
ahead on it?


cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postponing some large patches to 9.2

2011-02-08 Thread Hitoshi Harada
2011/2/8 Steve Singer ssinger...@sympatico.ca:
 On 11-02-07 10:37 PM, Robert Haas wrote:

 - The PL/python extravaganza.  I'm not really clear where we stand
 with this.  There are a lot of patches here.


 Some of the patches have been committed a few others are ready (or almost
 ready) for a committer.   The table function one  is the only one in
 'waiting for author'

I haven't quite finished my reviews of the pl/python patches yet, but it
seems that "don't remove arguments" will be easy to review and is unlikely
to have issues. The quote functions can be committed already as-is. The
table function patch should be in if Jan sends another patch soon. The
custom datatype parser still looks premature, though we want to give more
feedback about its design. I'm not sure about the other patches.

 4 of the patches haven't yet received any review. Jan Urbanski has been
 pretty good about posting updated patches as the dependent patches get
 updated.  It would be good if a few people grabbed these. The individual
 patches tend to not be that large.

I agree he has done quite a lot of good work. But still, some of the patches
are very easy and others are more like WIP. I hope someone can review
them as soon as possible.

Regards,

-- 
Hitoshi Harada

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postponing some large patches to 9.2

2011-02-08 Thread Jan Urbański
On 08/02/11 15:44, Hitoshi Harada wrote:
 2011/2/8 Steve Singer ssinger...@sympatico.ca:
 On 11-02-07 10:37 PM, Robert Haas wrote:

 - The PL/python extravaganza.  I'm not really clear where we stand
 with this.  There are a lot of patches here.


 Some of the patches have been committed a few others are ready (or almost
 ready) for a committer.   The table function one  is the only one in
 'waiting for author'
 
 I didn't quite finished my reviews on pl/python patches yet, but it
 seems that don't remove argument will be easy to review and unlikely
 to have issues. quote functions can be committed already as-is.
 table function should be in if Jan sends another patch soon. custom
 datatype parser looks premature yet, though we want to give more
 feedbacks about its design. I'm not sure about other patches.

From the outstanding PL/Python patches:

 * invalidate composites - it fixes a TODO item and is a general
improvement, although the case being fixed is rather rare. Should be
straightforward to review, it's small and localized.

 * tracebacks - IMHO a big improvement, but should get more testing than
I gave it, as traversing Python stacks tends to be tricky. Still, it's
very localized (basically just getting a traceback string).

 * table functions - I'm working on this one, should have an update today

 * custom datatype parsers - more of a PoC, and because the discussion
about hstores in PL/Python died down I decided to go ahead and send a
patch implementing the last idea from that thread. I'm fine with it not
making 9.1 because this should be designed carefully, and will affect
decisions similar to those that are now being taken WRT arrays in PL/Perl.

 * custom SPI exceptions - I'd really like this one to go in, because it
allows writing UPSERT-kind functions in PL/Python very easily, and it's
just a handful of lines of code

 * don't remove arguments - a bugfix, really, and a very small one

So from the above, I'd say custom datatype parsers could get rejected if
no one feels like having a discussion about it for 9.1. Table functions,
custom SPI exceptions and tracebacks are niceties that, if postponed to
9.2, will just mean that many fewer features in 9.1. The rest is bordering
on bugfixes, and I think should go in.

Cheers,
Jan

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] arrays as pl/perl input arguments [PATCH]

2011-02-08 Thread Alexey Klyukin

On Feb 6, 2011, at 9:43 AM, Alex Hunsaker wrote:

 On Thu, Feb 3, 2011 at 18:29, Alex Hunsaker bada...@gmail.com wrote:
 On Mon, Jan 31, 2011 at 01:34, Alexey Klyukin al...@commandprompt.com 
 wrote:
 I've looked at the patch and added a test for arrays exceeding or equal 
 maximum dimensions to check, whether the recursive function won't bring 
 surprises there. I've also added check_stack_depth calls to both 
 split_array and plperl_hash_from_tuple. Note that the regression fails 
 currently due to the incorrect error reporting in
 PostgreSQL, per 
 http://archives.postgresql.org/pgsql-hackers/2011-01/msg02888.php.
 
 The v5 is attached.
 
 One thing I find odd is we allow for nested arrays, but not nested
 composites.  For example:
 ...
 On the other end, the same is true when returning. If you try to
 return a nested composite or array, the child better be a string as it
 wont let you pass a hash:
 
 So here is a v6 that does the above. It does so by providing a generic
 plperl_sv_to_datum() function and converting the various places to use
 it. One cool thing is you can now use the composite types or arrays
 passed in (or returned from another spi!) as arguments for
 spi_exec_prepared(). Before you would have to convert the hashes to
 strings. It also provides a real plperl_convert_to_pg_array (now
 plperl_array_to_datum) that can handle composite and other random
 datatypes. This also means we don't have to convert arrays to a string
 representation first. (which removes part of the argument for #5 @
 http://archives.postgresql.org/pgsql-hackers/2011-02/msg00197.php)
 
 I have attached a stand alone version of the above so its easier to
 look over. The diffstat is fairly small (ignoring added regression
 tests)
 1 files changed, 259 insertions(+), 165 deletions(-)
 
 Comments?

Thanks, looks great to me. It passes all the tests on my OS X system. I wonder
what the purpose of the amagic_call in get_perl_array_ref is, instead of
calling newRV_noinc on the target SV *?

Also, in array_to_datum (line 1050), if get_perl_array_ref returns NULL for
the first element of the corresponding dimension, but non-NULL for the second
one, it would use an uninitialized dims[cur_depth] value in the comparison
(which would probably still lead to the desired comparison result).

/A
--
Alexey Klyukin
The PostgreSQL Company - Command Prompt, Inc.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions support for pg_dump, patch v27

2011-02-08 Thread Tom Lane
Dimitri Fontaine dimi...@2ndquadrant.fr writes:
 I've spotted a comment that I think you missed updating.  The schema
 given in the control file is now created in all cases rather than only
 when the extension is not relocatable, right?

Hm, no, that logic is the same as before, no?

 I also note that the attached version doesn't contain \dX, is that an
 happenstance or a choice you did here?

I removed it --- it wasn't documented and I didn't see much point anyway
in a \d command that just duplicates a system view, especially a view
only usable by superusers.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: patch: fix performance problems with repated decomprimation of varlena values in plpgsql

2011-02-08 Thread Robert Haas
On Mon, Feb 7, 2011 at 11:52 PM, Noah Misch n...@leadboat.com wrote:
 On Mon, Feb 07, 2011 at 11:16:18PM -0500, Robert Haas wrote:
 So
 can we just get rid of should_be_detoasted, and have exec_eval_datum()
 or its callers instead test:

 !var->isnull && var->datatype->typbyval && var->datatype->typlen == -1
  && VARATT_IS_EXTENDED(var->value)

 FWIW, this is what I meant by option 2 in my summary.

 I haven't tested this, but it's not clear that'd be measurably slower
 than checking a single Boolean.

 Pavel benchmarked this or something close, measuring a performance loss:
 http://archives.postgresql.org/message-id/aanlktikdhekc9r38w2ttzomdr8vdavanr3lhqfjke...@mail.gmail.com

 Tom also expressed concern over performance:
 http://archives.postgresql.org/message-id/24266.1295462...@sss.pgh.pa.us

 Not sure what's next.

Well, Pavel's subsequent reply suggested that he didn't test exactly
this thing, so maybe there's hope.

Or maybe not.  If Tom thought one branch inside exec_eval_datum() was
going to be too expensive, four isn't going to be better.

But I think we're out of time to work on this for this cycle.  Even if
my latest idea is brilliant (and it may not be), we still have to test
it in a variety of cases and get consensus on it, which seems like
more than we can manage right now.  I think it's time to mark this one
Returned with Feedback, or perhaps Rejected would be more accurate in
this instance.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: patch: fix performance problems with repated decomprimation of varlena values in plpgsql

2011-02-08 Thread Noah Misch
On Tue, Feb 08, 2011 at 08:00:42AM +0100, Pavel Stehule wrote:
 2011/2/8 Noah Misch n...@leadboat.com:
  On Mon, Feb 07, 2011 at 11:16:18PM -0500, Robert Haas wrote:
  So
  can we just get rid of should_be_detoasted, and have exec_eval_datum()
  or its callers instead test:
 
  !var->isnull && var->datatype->typbyval && var->datatype->typlen == -1
   && VARATT_IS_EXTENDED(var->value)
 
  FWIW, this is what I meant by option 2 in my summary.
 
  I haven't tested this, but it's not clear that'd be measurably slower
  than checking a single Boolean.
 
  Pavel benchmarked this or something close, measuring a performance loss:
  http://archives.postgresql.org/message-id/aanlktikdhekc9r38w2ttzomdr8vdavanr3lhqfjke...@mail.gmail.com
 
 I tested this in situation when Datum is detoasted on usage, not in
 assignment. So impact will be less.

Robert spoke of doing it on usage (exec_eval_datum()) too, though.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Reduce the amount of data loss on the standby

2011-02-08 Thread Simon Riggs
On Sat, 2011-01-15 at 07:31 +0900, Fujii Masao wrote:

 I propose the patch which reduces the amount of data loss
 on the standby.

Committed. Well spotted.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-08 Thread Dan Ports
On Tue, Feb 08, 2011 at 11:25:34AM +0200, Heikki Linnakangas wrote:
 On 08.02.2011 10:43, Kevin Grittner wrote:
  I see that at least three BuildFarm critters don't have UINT64_MAX
  defined.
 
 I guess we'll have to just #define it ourselves. Or could we just pick 
 another magic value, do we actually rely on InvalidSerCommitSeqno being 
 higher than all other values anywhere?

As far as I know we don't specifically rely on that anywhere, and
indeed I did have it #defined to 1 at one point (with the other
constants adjusted to match) and I don't recall any problems. But given
that we most often use InvalidSerCommitSeqNo to mean not committed
yet, it made more sense to set it to UINT64_MAX so that if a
comparison did sneak in it would do the right thing.

I did dust off a copy of the ANSI standard at the time, and it was
pretty explicit that UINT64_MAX is supposed to be defined in stdint.h.
But that may just be a C99 requirement (I didn't have an older copy of
the standard), and it's obviously no guarantee that it actually is
defined.

Dan

-- 
Dan R. K. Ports  MIT CSAILhttp://drkp.net/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Add ENCODING option to COPY

2011-02-08 Thread Peter Eisentraut
On fre, 2011-02-04 at 10:47 -0500, Tom Lane wrote:
 The reason that we use quotes in CREATE DATABASE is that encoding
 names aren't assumed to be valid SQL identifiers.  If this patch isn't
 following the CREATE DATABASE precedent, it's the patch that's wrong,
 not CREATE DATABASE.

Since encoding names are built-in and therefore well known, and the
names have been aligned with the SQL standard names, which are
identifiers, I don't think this argument is valid (anymore).

It probably shouldn't be changed inconsistently as part of an unrelated
patch, but I think the idea has merit.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: patch: fix performance problems with repated decomprimation of varlena values in plpgsql

2011-02-08 Thread Pavel Stehule
2011/2/8 Noah Misch n...@leadboat.com:
 On Tue, Feb 08, 2011 at 08:00:42AM +0100, Pavel Stehule wrote:
 2011/2/8 Noah Misch n...@leadboat.com:
  On Mon, Feb 07, 2011 at 11:16:18PM -0500, Robert Haas wrote:
  So
  can we just get rid of should_be_detoasted, and have exec_eval_datum()
  or its callers instead test:
 
  !var->isnull && var->datatype->typbyval && var->datatype->typlen == -1
   && VARATT_IS_EXTENDED(var->value)
 
  FWIW, this is what I meant by option 2 in my summary.
 
  I haven't tested this, but it's not clear that'd be measurably slower
  than checking a single Boolean.
 
  Pavel benchmarked this or something close, measuring a performance loss:
  http://archives.postgresql.org/message-id/aanlktikdhekc9r38w2ttzomdr8vdavanr3lhqfjke...@mail.gmail.com

 I tested this in situation when Datum is detoasted on usage, not in
 assignment. So impact will be less.

 Robert spoke of doing it on usage (exec_eval_datum()) too, though.


I am blind, sorry

Regards

Pavel

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-08 Thread Kevin Grittner
Dan Ports d...@csail.mit.edu wrote: 
 On Tue, Feb 08, 2011 at 11:25:34AM +0200, Heikki Linnakangas
wrote:
 On 08.02.2011 10:43, Kevin Grittner wrote:
 I see that at least three BuildFarm critters don't have
 UINT64_MAX defined.
 
 I guess we'll have to just #define it ourselves. Or could we just
 pick another magic value, do we actually rely on
 InvalidSerCommitSeqno being higher than all other values
 anywhere?
 
 As far as I know we don't specifically rely on that anywhere, and
 indeed I did have it #defined to 1 at one point (with the other
 constants adjusted to match) and I don't recall any problems. But
 given that we most often use InvalidSerCommitSeqNo to mean not
 committed yet, it made more sense to set it to UINT64_MAX so that
 if a comparison did sneak in it would do the right thing.
 
 I did dust off a copy of the ANSI standard at the time, and it was
 pretty explicit that UINT64_MAX is supposed to be defined in
 stdint.h. But that may just be a C99 requirement (I didn't have
 an older copy of the standard), and it's obviously no guarantee
 that it actually is defined.
 
Attached is something which will work.  Whether people prefer this
or a definition of UINT64_MAX in some header file (if it's missing)
doesn't matter much to me.
 
At any rate, if someone commits this one-liner, three critters
should go back to green.
 
-Kevin

*** a/src/include/storage/predicate_internals.h
--- b/src/include/storage/predicate_internals.h
***
*** 33,39  typedef uint64 SerCommitSeqNo;
   *  at that point.  It's earlier than all normal sequence numbers,
   *  and is only used by recovered prepared transactions
   */
! #define InvalidSerCommitSeqNo UINT64_MAX
  #define RecoverySerCommitSeqNo((SerCommitSeqNo) 1)
  #define FirstNormalSerCommitSeqNo ((SerCommitSeqNo) 2)
  
--- 33,39 
   *  at that point.  It's earlier than all normal sequence numbers,
   *  and is only used by recovered prepared transactions
   */
! #define InvalidSerCommitSeqNo ((SerCommitSeqNo) UINT64CONST(0xFFFFFFFFFFFFFFFF))
  #define RecoverySerCommitSeqNo((SerCommitSeqNo) 1)
  #define FirstNormalSerCommitSeqNo ((SerCommitSeqNo) 2)
  

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Add ENCODING option to COPY

2011-02-08 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 On fre, 2011-02-04 at 10:47 -0500, Tom Lane wrote:
 The reason that we use quotes in CREATE DATABASE is that encoding
 names aren't assumed to be valid SQL identifiers.  If this patch isn't
 following the CREATE DATABASE precedent, it's the patch that's wrong,
 not CREATE DATABASE.

 Since encoding names are built-in and therefore well known, and the
 names have been aligned with the SQL standard names, which are
 identifiers, I don't think this argument is valid (anymore).

What about UTF-8?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: patch: fix performance problems with repated decomprimation of varlena values in plpgsql

2011-02-08 Thread Pavel Stehule
2011/2/8 Robert Haas robertmh...@gmail.com:
 On Mon, Feb 7, 2011 at 11:52 PM, Noah Misch n...@leadboat.com wrote:
 On Mon, Feb 07, 2011 at 11:16:18PM -0500, Robert Haas wrote:
 So
 can we just get rid of should_be_detoasted, and have exec_eval_datum()
 or its callers instead test:

 !var->isnull && var->datatype->typbyval && var->datatype->typlen == -1
  && VARATT_IS_EXTENDED(var->value)

 FWIW, this is what I meant by option 2 in my summary.

 I haven't tested this, but it's not clear that'd be measurably slower
 than checking a single Boolean.

 Pavel benchmarked this or something close, measuring a performance loss:
 http://archives.postgresql.org/message-id/aanlktikdhekc9r38w2ttzomdr8vdavanr3lhqfjke...@mail.gmail.com

 Tom also expressed concern over performance:
 http://archives.postgresql.org/message-id/24266.1295462...@sss.pgh.pa.us

 Not sure what's next.

 Well, Pavel's subsequent reply suggested that he didn't test exactly
 this thing, so maybe there's hope.

 Or maybe not.  If Tom thought one branch inside exec_eval_datum() was
 going to be too expensive, four isn't going to be better.

 But I think we're out of time to work on this for this cycle.  Even if
 my latest idea is brilliant (and it may not be), we still have to test
 it in a variety of cases and get consensus on it, which seems like
 more than we can manage right now.  I think it's time to mark this one
 Returned with Feedback, or perhaps Rejected would be more accurate in
 this instance.

If you have a brilliant idea, then please send a patch :). There were
more ideas, and I am a little bit lost.

I'll have time on the weekend, and I can do some tests.

Regards

Pavel Stehule




 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-08 Thread Kevin Grittner
I wrote:
 
 The multiplier of 10 PredXactList structures per connection is
 kind of arbitrary.  It affects the point at which information is
 pushed to the lossy summary, so any number from 2 up will work
 correctly; it's a matter of performance and false positive rate. 
 We might want to put that on a GUC and default it to something
 lower.
 
If the consensus is that we want to add this knob, I can code it up
today.  If we default it to something low, we can knock off a large
part of the 2MB increase in shared memory used by SSI in the default
configuration.  For those not using SERIALIZABLE transactions the
only impact is that less shared memory will be reserved for
something they're not using.  For those who try SERIALIZABLE
transactions, the smaller the number, the sooner performance will
start to drop off under load -- especially in the face of a
long-running READ WRITE transaction.  Since it determines shared
memory allocation, it would have to be a restart-required GUC.
 
I do have some concern that if this defaults to too low a number,
those who try SSI without bumping it and restarting the postmaster
will not like the performance under load very much.  SSI performance
would not be affected by a low setting under light load when there
isn't a long-running READ WRITE transaction.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-08 Thread Heikki Linnakangas

On 08.02.2011 17:50, Kevin Grittner wrote:

Attached is something which will work.  Whether people prefer this
or a definition of UINT64_MAX in some header file (if it's missing)
doesn't matter much to me.

At any rate, if someone commits this one-liner, three critters
should go back to green.


Thanks, committed.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep for 2011CF1

2011-02-08 Thread Robert Haas
On Mon, Feb 7, 2011 at 5:16 PM, Simon Riggs si...@2ndquadrant.com wrote:
 On Mon, 2011-02-07 at 11:50 -0800, Josh Berkus wrote:
  I just spoke to my manager at EnterpriseDB and he cleared my schedule
  for the next two days to work on this.  So I'll go hack on this now.
  I haven't read the patch yet so I don't know for sure how quite I'll
  be able to get up to speed on it, so if someone who is more familiar
  with this code wants to grab the baton away from me, feel free.
  Otherwise, I'll see what I can do with it.
 
  Presumably you have a reason for declaring war? I'm sorry for that, I
  really am.

  How is clearing out his whole schedule to help review & fix the patch
 declaring war?  You have an odd attitude towards assistance, Simon.

 It seems likely that Robert had not read my reply where I said I had
 time to work on this project before posting. In that case, I withdraw my
 comments and apologise.

I did read it, but I still don't think saying I'm going to work on a
patch that's been stalled for weeks - or really months - constitutes
any sort of declaration of war, peace, or anything else.  I have just
as much of a right to work on a given feature as you or anyone else
does.  Typically, when I work on patches and help get them committed,
the response is thanks.  I'm not so sure what's different in this
case.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postponing some large patches to 9.2

2011-02-08 Thread Chris Browne
sfr...@snowman.net (Stephen Frost) writes:
 * Robert Haas (robertmh...@gmail.com) wrote:
 - Range Types.  This is a large patch which was submitted for the
 first time to the last CommitFest of the cycle, and the first version
 that had no open TODO items was posted yesterday, three-quarters of
 the way through that last CommitFest.  Some good review has been done.
  While more is probably needed, I think we should feel good about
 what's been accomplished and mark this one Returned with Feedback.

 I don't agree w/ punting Range Types.  Range Types were discussed as
 far back as the 2010 developer meeting, were discussed quite a bit
 again starting in October and throughout the fall, and Jeff has
 regularly been posting updates to it.  Given how thorough Jeff is, my
 feeling is that this patch is more than ready for beta.  My impression
 is also that it's not as invasive or destabilizing as the others and
 while it wasn't being posted to the previous commit fests, it was
 clearly being worked on, updated, and improved.

I generally mirror those thoughts.  Range Types don't seem invasive or
destabilizing, and the code base has been deployed for quite some time
as an extension (not quite contrib).  It would be disappointing to
drop this one when it is mighty close.

 - synchronous replication.  Based on some looking at this today, I am
 somewhat doubtful about the possibility of me or anyone else beating
 this completely into shape in time for 9.2, unless we choose to extend
 the deadline by several weeks.  Simon said that he would have time to
 finish this in the next two weeks, but, as noted, the CommitFest is
 scheduled to be over in ONE week, and it looks to me like this is
 still pretty rough.  However, there's a lot of good stuff in here, and
 I think it might be practical to get some portion of it committed even
 if we can't agree on all of it.  I recommend we give this one a little
 more time to shake out before giving up on it.

 It really would be nice to have this, but I agree that it's pretty late
 in the game for it to be in the state it appears to be in. :/  It also
 seems to have been stalled for the past couple of months, which doesn't
 bode well for it, in my view.

The stall troubles me, and doesn't bode terribly well for 9.1.
-- 
Rules  of the  Evil  Overlord #39.  If  I absolutely  must ride  into
battle, I  will certainly not ride  at the forefront of  my Legions of
Terror, nor will I seek out my opposite number among his army.
http://www.eviloverlord.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postponing some large patches to 9.2

2011-02-08 Thread Steve Singer

On 11-02-08 10:07 AM, Jan Urbański wrote:


  * custom SPI exceptions - I'd really like this one to go in, because it
allows writing UPSERT-kind functions in PL/Python very easily, and it's
just a handful of lines of code



I will try to do a review of this one (probably tomorrow night) since 
I've reviewed many of the related patches.



  * don't remove arguments - a bugfix, really, and a very small one

So from the above, I'd say custom datatype parsers could get rejected if
no one feels like having a discussion about it for 9.1. Table functions,
custom SPI exceptions and tracebacks are niceties that, if postponed to
9.2, will just mean that many fewer features in 9.1. The rest is bordering
on bugfixes, and I think it should go in.

Cheers,
Jan




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: RangeTypes

2011-02-08 Thread Jeff Davis
On Mon, 2011-02-07 at 20:32 +0200, Peter Eisentraut wrote:
 Have you considered a grammar approach like for arrays, so that you
 would write something like
 
 CREATE TABLE ... (
 foo RANGE OF int
 );
 
 instead of explicitly creating a range type for every scalar type in
 existence?  I think that that might be easier to use in the common case.

It would be nice, but the type system just isn't powerful enough to
express things like that right now, as far as I can tell.

That works for arrays because every type in PG has a second pg_type
entry for the array type. I don't think we want to do something similar
for range types -- especially if there are alternative range types for a
given base type.

Regards,
Jeff Davis


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions support for pg_dump, patch v27

2011-02-08 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:

 Dimitri Fontaine dimi...@2ndquadrant.fr writes:
 I've spotted a comment that I think you missed updating.  The schema
 given in the control file is now created in all cases rather than only
 when the extension is not relocatable, right?

 Hm, no, that logic is the same as before no?

Well I had

if (!control->relocatable && control->schema != NULL)

And you have 

+   else if (control->schema != NULL)

So you're considering the schema option independently of the relocatable
option, which is ok because you changed those options' defaults and
conflicts in the control file parsing.  Only the comment needs adjusting.

 I removed it --- it wasn't documented and I didn't see much point anyway
 in a \d command that just duplicates a system view, especially a view
 only usable by superusers.

It used to not be a straight select from a system view, and I wanted to
offer something as simple as clicking a check box in pgAdmin for people
who want to see what's available and install it.  But well, the system
view is good enough now, I guess; we're talking about DBAs here after all.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Add ENCODING option to COPY

2011-02-08 Thread Hitoshi Harada
2011/2/9 Tom Lane t...@sss.pgh.pa.us:
 Peter Eisentraut pete...@gmx.net writes:
 On fre, 2011-02-04 at 10:47 -0500, Tom Lane wrote:
 The reason that we use quotes in CREATE DATABASE is that encoding
 names aren't assumed to be valid SQL identifiers.  If this patch isn't
 following the CREATE DATABASE precedent, it's the patch that's wrong,
 not CREATE DATABASE.

 Since encoding names are built-in and therefore well known, and the
 names have been aligned with the SQL standard names, which are
 identifiers, I don't think this argument is valid (anymore).

 What about UTF-8?

Then, quote it?

db1=# set client_encoding to utf-8;
ERROR:  syntax error at or near -

Regards,


-- 
Hitoshi Harada

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: patch: fix performance problems with repated decomprimation of varlena values in plpgsql

2011-02-08 Thread Noah Misch
On Tue, Feb 08, 2011 at 10:24:03AM -0500, Robert Haas wrote:
 Well, Pavel's subsequent reply suggested that he didn't test exactly
 this thing, so maybe there's hope.

No hope on that basis, no.

 Or maybe not.  If Tom thought one branch inside exec_eval_datum() was
 going to be too expensive, four isn't going to be better.

He was commenting on a proposal equivalent to yours.  You might want to reread
this thread in its entirety; we're coming full circle.

 But I think we're out of time to work on this for this cycle.  Even if
 my latest idea is brilliant (and it may not be), we still have to test
 it in a variety of cases and get consensus on it, which seems like
 more than we can manage right now.  I think it's time to mark this one
 Returned with Feedback, or perhaps Rejected would be more accurate in
 this instance.

It's not as if this patch raised complex questions that folks need more time to
digest.  For a patch this small and simple, we minimally owe Pavel a direct
answer about its rejection.

Thanks,
nm

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Extensions versus pg_upgrade

2011-02-08 Thread Tom Lane
It just occurred to me that the extensions patch thoroughly breaks
pg_upgrade, because pg_upgrade imagines that it can control the specific
OIDs assigned to certain SQL objects such as data types.  That is of
course not gonna happen for objects within an extension.  In fact, since
part of the point of an extension is that the set of objects it packages
might change across versions, it wouldn't be sensible to try to force
OID assignments even if there were a practical way to do it.

Offhand the only escape route that I can see is for a binary upgrade to
not try to install a newer version of an extension, but just duplicate
all the contained objects exactly.  Which is probably less than trivial
to do --- we could get part of the way there by having pg_dump not
suppress the member objects in --binary-upgrade mode, but then we need
some awful kluge to re-establish their pg_depend links to the extension.
In any case that would ratchet the priority of ALTER EXTENSION UPGRADE
back up to a must-have-for-9.1, since pg_upgrade would then leave you
with a non-upgraded extension.

Now what?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postponing some large patches to 9.2

2011-02-08 Thread Robert Haas
On Tue, Feb 8, 2011 at 10:40 AM, Chris Browne cbbro...@acm.org wrote:
 sfr...@snowman.net (Stephen Frost) writes:
 * Robert Haas (robertmh...@gmail.com) wrote:
 - Range Types.  This is a large patch which was submitted for the
 first time to the last CommitFest of the cycle, and the first version
 that had no open TODO items was posted yesterday, three-quarters of
 the way through that last CommitFest.  Some good review has been done.
  While more is probably needed, I think we should feel good about
 what's been accomplished and mark this one Returned with Feedback.

 I don't agree w/ punting Range Types.  Range Types were discussed as
 far back as the 2010 developer meeting, were discussed quite a bit
 again starting in October and throughout the fall, and Jeff has
 regularly been posting updates to it.  Given how thorough Jeff is, my
 feeling is that this patch is more than ready for beta.  My impression
 is also that it's not as invasive or destablizing as the others and
 while it wasn't being posted to the previous commit fests, it was
 clearly being worked on, updated, and improved.

 I generally mirror those thoughts.  Range Types don't seem invasive or
 destabilizing, and the code base has been deployed for quite some time
 as an extension (not quite contrib).  It would be disappointing to
 drop this one when it is mighty close.

It's a 5400 line patch that wasn't completed until the middle of the
current CommitFest.  Nobody has ever submitted a major feature patch
of that size that got done in a single CommitFest, to my recollection,
or even half that size.  Compare Hot Standby or True Serializability,
both of which required basically a full development cycle.  It may be
true that the idea has been kicking around for a long time, but it's
been code-complete for one week.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions versus pg_upgrade

2011-02-08 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 In any case that would ratchet the priority of ALTER EXTENSION UPGRADE
 back up to a must-have-for-9.1, since pg_upgrade would then leave you
 with a non-upgraded extension.

 Now what?

What would be the problem with pg_upgrade acting the same as a
dump & reload cycle as far as extensions are concerned?  After all those
can be considered as part of the schema, not part of the data, and the
system catalogs are upgraded by the tool.

It would then only break user objects that depend on the extension's
objects OIDs, but that would be the same if they instead recorded the
OID of catalog entries, right?

So a valid answer for me would be that when you pg_upgrade, the
extensions are installed again from their scripts.  If you want to go
further than that, you can insist on having the same version of the
extension on both sides, but that would defeat the purpose of the tool
somehow.  After all you asked for an upgrade…

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions support for pg_dump, patch v27

2011-02-08 Thread Tom Lane
Dimitri Fontaine dimi...@2ndquadrant.fr writes:
 Tom Lane t...@sss.pgh.pa.us writes:
 Hm, no, that logic is the same as before no?

 Well I had

   if (!control->relocatable && control->schema != NULL)

 And you have 

 + else if (control->schema != NULL)

Yeah, I deleted that relocatable test because it's redundant:
control->schema cannot be set for a relocatable extension,
cf the test in read_extension_control_file.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Range Types

2011-02-08 Thread Jeff Davis
On Mon, 2011-02-07 at 18:23 +0100, Dimitri Fontaine wrote:
 I would think
 
   CREATE TYPE foo AS RANGE (bar) USING (btree_ops);
 
 The USING clause is optional, because you generally have a default btree
 opclass for the datatype.

There are other options, like CANONICAL, so where do those fit?

If CREATE TYPE already has an options list, it seems a little strange to
add grammar to support this feature. USING doesn't seem to mean a lot,
except that we happen to use it in other contexts to mean operator
class.
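
For illustration, a sketch of the options-list style being referred to; the
option names and the canonical function are examples only, not settled
syntax:

    CREATE TYPE floatrange AS RANGE (
        SUBTYPE = float8,
        SUBTYPE_OPCLASS = float8_ops,
        CANONICAL = floatrange_canonical   -- hypothetical function
    );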

Regards,
Jeff Davis


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Named restore points

2011-02-08 Thread Euler Taveira de Oliveira

Em 08-02-2011 11:05, Simon Riggs escreveu:

On Fri, 2011-02-04 at 21:15 -0500, Jaime Casanova wrote:


+   else if (recoveryTarget == RECOVERY_TARGET_NAME)
+       snprintf(buffer, sizeof(buffer),
+                "%s%u\t%s\t%s named restore point %s\n",
+                (srcfd < 0) ? "" : "\n",
+                parentTLI,
+                xlogfname,
+                recoveryStopAfter ? "after" : "before",
+                recoveryStopNamedRestorePoint);

It doesn't matter if it is after or before the restore point.  After/Before
only make sense when we're dealing with transaction or time.

Removed.




you're right


Not sure I understand the comment only make sense when we're dealing
with transaction or time.  Why?

Because a named restore point is a no-op xlog record; besides, transaction and
time involve xlog records that contain data.



--
  Euler Taveira de Oliveira
  http://www.timbira.com/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Named restore points

2011-02-08 Thread Simon Riggs
On Tue, 2011-02-08 at 14:07 -0300, Euler Taveira de Oliveira wrote:

  Not sure I understand the comment only make sense when we're dealing
  with transaction or time.  Why?
 
 Because named restore point is a noop xlog record; besides, transaction and 
 time involves xlog records that contain data.

Thank you. How obvious!

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: patch: fix performance problems with repated decomprimation of varlena values in plpgsql

2011-02-08 Thread Pavel Stehule
2011/2/8 Noah Misch n...@leadboat.com:
 On Tue, Feb 08, 2011 at 10:24:03AM -0500, Robert Haas wrote:
 Well, Pavel's subsequent reply suggested that he didn't test exactly
 this thing, so maybe there's hope.

 No hope on that basis, no.

 Or maybe not.  If Tom thought one branch inside exec_eval_datum() was
 going to be too expensive, four isn't going to be better.

 He was commenting on a proposal equivalent to yours.  You might want to reread
 this thread in its entirety; we're coming full circle.

 But I think we're out of time to work on this for this cycle.  Even if
 my latest idea is brilliant (and it may not be), we still have to test
 it in a variety of cases and get consensus on it, which seems like
 more than we can manage right now.  I think it's time to mark this one
 Returned with Feedback, or perhaps Rejected would be more accurate in
 this instance.

 It's not as if this patch raised complex questions that folks need more time 
 to
 digest.  For a patch this small and simple, we minimally owe Pavel a direct
 answer about its rejection.


It's not necessary. But we did not move one step forward :(. The same
situation can be repeated next year.

So I would like to see Robert's patch, if it is possible. And it would be
nice if this simple issue could be finished within this commitfest.

I can do more performance tests of my initial patch. Maybe this patch
isn't nice, but I am sure this version has a minimal negative impact and
a maximal positive impact.

We are speaking about twenty lines that can be removed if people report
potential problems, so either variant - mine or Tom's - can be simply
removed. It doesn't modify the behaviour of execution, it doesn't add a
new feature. It just removes one pathological (+/-) issue.

I can live without this patch. I know the workaround and know the
symptoms. It's a pity there isn't any PostGIS user in this discussion.
For those people implicit detoasting could be interesting and could help
with real tests.

Regards

Pavel




 Thanks,
 nm


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Allow pg_archivecleanup to ignore extensions

2011-02-08 Thread Euler Taveira de Oliveira

Em 08-02-2011 04:57, Greg Smith escreveu:

We recently got some on-list griping that pg_standby doesn't handle
archive files that are compressed, either. Given how the job I'm working
on this week is going, I'll probably have to add that feature next.
That's actually an easier source code hack than this one, because of how
the pg_standby code modularizes the concept of a restore command.

This was already proposed a few years ago [1]. I have used a modified 
pg_standby with this feature for a year or so.



[1] 
http://archives.postgresql.org/message-id/e4ccc24e0810222010p12bae2f4xa3a11cb2bc51bd89%40mail.gmail.com



--
  Euler Taveira de Oliveira
  http://www.timbira.com/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions versus pg_upgrade

2011-02-08 Thread Tom Lane
Dimitri Fontaine dimi...@2ndquadrant.fr writes:
 Tom Lane t...@sss.pgh.pa.us writes:
 Now what?

 What would be the problem with pg_upgrade acting the same as a
 dumpreload cycle as far as extensions are concerned?

Um, how about it doesn't work?

The reason that data type OIDs have to be preserved, for example, is
that they might be out on disk.  Suppose that you have the cube
extension installed and there are some user tables containing columns of
type cube[].  Those arrays are going to have type cube's OID embedded in
them.  If cube has a different OID after pg_upgrade then the arrays are
broken.
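
To make that concrete, a sketch (object names and the OID are purely
illustrative):

    -- old cluster
    CREATE TABLE boxes (shape cube[]);   -- array values on disk embed cube's type OID
    SELECT oid FROM pg_type WHERE typname = 'cube';   -- suppose this returns 16412

    -- If CREATE EXTENSION in the new cluster assigns cube a different OID,
    -- every stored cube[] value still carries 16412 in its array header,
    -- so the data is broken after pg_upgrade unless the OID is preserved.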

Even letting an extension script run and create data types that weren't
there at all before is problematic, since those types could easily claim
OIDs that need to be reserved for pre-existing types that appear later
in the dump script.

Similar issues arise for the other cases where pg_upgrade is forcing OID
assignments; it's not doing that just for fun.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] arrays as pl/perl input arguments [PATCH]

2011-02-08 Thread Alex Hunsaker
On Tue, Feb 8, 2011 at 08:18, Alexey Klyukin al...@commandprompt.com wrote:

 On Feb 6, 2011, at 9:43 AM, Alex Hunsaker wrote:

 So here is a v6
 
 Comments?

 Thanks, looks great to me. It passes all the tests on my OS X system. I wonder
 what's the purpose of the amagic_call in get_perl_array_ref, instead of
 calling newRV_noinc on the target SV * ?

Well, you can't AV *av = (AV *)SvRV(sv); And the SV * amagic_call
returns is already a reference, so the newRV_noinc() would be
 redundant, no? It occurs to me that instead of doing the amagic_call we
 could just call the to_array method directly using perl_call_pv().
 That would look more normal and less magic -- that's probably a good
thing?


 Also, in array_to_datum (line 1050), if get_perl_array_ref returns NULL for
 the first element of the corresponding dimension, but non-NULL for the second
 one - it would use uninitialized dims[cur_depth] value in comparison (which,
 although, would probably lead to the desired comparison result).

Good catch, will fix. All of those checks should be outside of the while loop.

While I'm at it I'll also rebase against HEAD (I'm sure there will be
some conflicts with that other plperl patch that just went in ;)).

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions versus pg_upgrade

2011-02-08 Thread Robert Haas
On Tue, Feb 8, 2011 at 11:53 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 It just occurred to me that the extensions patch thoroughly breaks
 pg_upgrade, because pg_upgrade imagines that it can control the specific
 OIDs assigned to certain SQL objects such as data types.  That is of
 course not gonna happen for objects within an extension.  In fact, since
 part of the point of an extension is that the set of objects it packages
 might change across versions, it wouldn't be sensible to try to force
 OID assignments even if there were a practical way to do it.

 Offhand the only escape route that I can see is for a binary upgrade to
 not try to install a newer version of an extension, but just duplicate
 all the contained objects exactly.  Which is probably less than trivial
 to do --- we could get part of the way there by having pg_dump not
 suppress the member objects in --binary-upgrade mode, but then we need
 some awful kluge to re-establish their pg_depend links to the extension.
 In any case that would ratchet the priority of ALTER EXTENSION UPGRADE
 back up to a must-have-for-9.1, since pg_upgrade would then leave you
 with a non-upgraded extension.

If we're going to commit any part of this for 9.1, it doesn't seem
like there's much choice but to insert the above-mentioned awful
kludge.  It's certainly not going to work if we fail to preserve the
OIDs in cases where pg_upgrade otherwise thinks it's necessary.

I guess I'm sort of coming to the conclusion that ALTER EXTENSION ..
UPGRADE is pretty much a must-have for a useful feature regardless of
this issue.  I had previously thought that we might be able to limp
along with half a feature for one release - if you're not actually
depending on anything in the extension, you could always drop it and
the install the new version.  But the more I think about it, the less
realistic that sounds; dependencies on the extension are going to be
very common, and while it may be practical to drop and recreate the
dependent objects in some cases, it's certainly going to annoy a lot
of people.

So I think the choices, realistically, are boot both halves of this to
9.2, or do it all now.  I don't really have an opinion on which way to
go with it.  We still have more than 40 patches to deal with for this
CommitFest, and certainly booting this would free up a bunch of your
time to go work on other things.  On the other hand, if you are fired
up about getting this done, maybe it makes sense to get it knocked out
while it's fresh in your mind instead of letting it linger for another
six months.  It's clearly a good feature, and I know that Dimitri has
put a lot of work into getting it this far.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions versus pg_upgrade

2011-02-08 Thread David E. Wheeler
On Feb 8, 2011, at 9:36 AM, Robert Haas wrote:

 I guess I'm sort of coming to the conclusion that ALTER EXTENSION ..
 UPGRADE is pretty much a must-have for a useful feature regardless of
 this issue.  I had previously thought that we might be able to limp
 along with half a feature for one release - if you're not actually
 depending on anything in the extension, you could always drop it and
 the install the new version.  But the more I think about it, the less
 realistic that sounds; dependencies on the extension are going to be
 very common, and while it may be practical to drop and recreate the
 dependent objects in some cases, it's certainly going to annoy a lot
 of people.

I was just thinking about the upgrade issue, and was wondering if this was 
do-able:

* No upgrade scripts. Nothing to concatenate, include, or maintain.
* No version tracking

Yeah, what I'm saying is, throw out the whole thing. Instead, what would happen 
is

ALTER EXTENSION foo UPGRADE;

Would do some trickery behind the scenes to just delete all those objects that 
are part of the extension (just like DROP EXTENSION should do), and then run 
the installation SQL script. So the objects would be exactly as specified in 
the installation script, with no need for the extension maintainer to worry 
about how to run upgrades, and no need for PostgreSQL to track version numbers.

Is this do-able? I recognize that there could be some issues with settings 
tables and what-not, but surely that's no different than for dump and reload. 
The question in my mind is whether it would even be possible for the extension 
code to unload every bit of an extension and load the new stuff, inside a 
transaction.

Thoughts?

Best,

David
-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: patch: fix performance problems with repated decomprimation of varlena values in plpgsql

2011-02-08 Thread Robert Haas
On Tue, Feb 8, 2011 at 11:38 AM, Noah Misch n...@leadboat.com wrote:
 It's not as if this patch raised complex questions that folks need more time 
 to
 digest.  For a patch this small and simple, we minimally owe Pavel a direct
 answer about its rejection.

Well, I don't see how we can give a totally straightforward answer at
this point.  There are several proposals on the table, and they have
different pros and cons.  Nobody is completely happy with any of them,
AFAICT.  I think as far as the original patch goes, it's rejected.  Is
there a variant of that approach that gives the same benefit with
better style?  I don't know.  I might be able to figure something out
if I spent an afternoon on it, but why is that my job?

There is sometimes a perception among non-committers that committers
are hiding the ball, as if the criteria for patch acceptance were
purely arbitrary and we make people guess what we want and then
complain when we don't get it.  I've even had that perception myself a
time or two, but I try hard not to do that and I think (hope) that
other committers do as well.  I've had my own share of ideas that I
thought were good and then had to abandon them either because they had
some deficiency which someone pointed out to me and I had to give up,
or else because I couldn't get consensus that the new behavior was
better than the old, even though it emphatically seemed so to me.
I've also had plenty of ideas that got shot down multiple times before
finally being accepted.  I can't, and don't, accept that there isn't
some way to improve the repeated detoasting situation, but I do not
know what the best solution is technically.  I don't even have an
opinion, without a lot more work.  And I certainly don't have the
ability to know what Tom or someone else will think about the code
that that solution requires.  The only thing I think IS clear is that
despite three weeks of discussion, we have no consensus on any of the
proposed patches, nor is there any clear path to reaching a consensus.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postponing some large patches to 9.2

2011-02-08 Thread David Fetter
On Mon, Feb 07, 2011 at 10:37:06PM -0500, Robert Haas wrote:
 On Mon, Feb 7, 2011 at 3:57 PM, Chris Browne cbbro...@acm.org wrote:
  It sure looks to me like there are going to be a bunch of items that,
  based on the recognized policies, need to get deferred to 9.2, and the
  prospects for Sync Rep getting into 9.1 don't look notably good to me.
 
  It's definitely readily arguable that fairness requires that:
 
   - Items not committable by 2011-02-15 be deferred to the 2011-Next fest
 
    There are around 25 items right now that are sitting with [Waiting
    for Author] and [Returned with Feedback] statuses.  They largely seem
    like pretty fair game for next fest.
 
   - Large items that weren't included in the 2010-11 fest be considered
    problematic to try to integrate into 9.1
 
    There sure seem to be some large items in the 2011-01 fest, which I
    thought wasn't supposed to be the case.
 
 This discussion reveals that it's time to start making some
 decisions about what can be accomplished for 9.1 and what must be
 postponed to 9.2.

Given how things worked, i.e. that people were not clear that 9.1
development had actually started, etc., I am again proposing that we
have one more CF starting March 15 to get this all cleaned up.  Yes, I
know that wasn't the plan, but I also know that we're really, really
close on a whole bunch of things, some of which have been in the
offing for years at this point, and we risk giving people the
impression, if they don't already have it, that if they're not in the
inner circle, their patches get lower priority no matter what their
merits.

 The big ones I think we should postpone are:
 
 - Range Types.  This is a large patch which was submitted for the
 first time to the last CommitFest of the cycle, and the first version
 that had no open TODO items was posted yesterday, three-quarters of
 the way through that last CommitFest.  Some good review has been done.
  While more is probably needed, I think we should feel good about
 what's been accomplished and mark this one Returned with Feedback.

This one's a product differentiator for PostgreSQL.  We can own this
space for at least a year before it gets on any other RDBMS's roadmap.
That the work wasn't submitted until it was in pretty good shape is a
mark in Jeff's favor, not a reason to punt for another year.

 - ALTER EXTENSION UPGRADE.  This is another large patch which was
 submitted for the first time to the last CommitFest of the cycle.
 The prerequisite patch to provide basic extension support has not
 been committed yet, although it sounds like that will happen soon.
 However, since that patch is undergoing a fair amount of surgery,
 it's reasonably certain that this will require significant rebasing.
 I think it would also be an overstatement to say that we have
 consensus on the design.  My feeling is that, unless Tom thinks that
 failing to get this committed now is going to leave us with a mess
 later, we should mark this one Returned with Feedback and revisit it
 for 9.2.

If we're going to leave this out, we should probably pull EXTENSIONs
out entirely.  Is that really where we want to go?

 - FOR KEY LOCK tables.  This patch, unfortunately, has not gotten a
 lot of review.  But it's a major, potentially destabilizing change
 that was, like the last two, submitted for the first time to the last
 CommitFest of the cycle.  Even if we assume for the sake of argument
 that the code is unusually good for a feature of this type, I don't
 think it's the right time to commit something like this.  I would
 argue for putting this off until 9.2, though preferably with a bit
 more review than it's gotten so far.
 
 The other remaining big patches are:
 
 - extension support for pg_upgrade.  Tom is planning to have this
 committed within a day or so, per latest news.

See above.

 - synchronous replication.  Based on some looking at this today, I am
 somewhat doubtful about the possibility of me or anyone else beating
 this completely into shape in time for 9.2, unless we choose to extend
 the deadline by several weeks.

+1 for extending that deadline.  This is another product
differentiator, and we can have it as that for about a year if we make
sure it gets into 9.1.

 Simon said that he would have time to
 finish this in the next two weeks, but, as noted, the CommitFest is
 scheduled to be over in ONE week, and it looks to me like this is
 still pretty rough.  However, there's a lot of good stuff in here, and
 I think it might be practical to get some portion of it committed even
 if we can't agree on all of it.  I recommend we give this one a little
 more time to shake out before giving up on it.
 
 - SQL/MED.  This patch unfortunately kind of stalled for a long time.
 However, I believe that Heikki is now working actively on the core
 patch, so I'm hopeful for some activity here soon.
 
 - Writeable CTEs.  Tom has promised to look at this, but doesn't seem
 to be in a hurry.

This patch is about 

Re: [HACKERS] Extensions versus pg_upgrade

2011-02-08 Thread Robert Haas
On Tue, Feb 8, 2011 at 12:57 PM, David E. Wheeler da...@kineticode.com wrote:
 On Feb 8, 2011, at 9:36 AM, Robert Haas wrote:

 I guess I'm sort of coming to the conclusion that ALTER EXTENSION ..
 UPGRADE is pretty much a must-have for a useful feature regardless of
 this issue.  I had previously thought that we might be able to limp
 along with half a feature for one release - if you're not actually
 depending on anything in the extension, you could always drop it and
 the install the new version.  But the more I think about it, the less
 realistic that sounds; dependencies on the extension are going to be
 very common, and while it may be practical to drop and recreate the
 dependent objects in some cases, it's certainly going to annoy a lot
 of people.

 I was just thinking about the upgrade issue, and was wondering if this was 
 do-able:

 * No upgrade scripts. Nothing to concatenate, include, or maintain.
 * No version tracking

 Yeah, what I'm saying is, throw out the whole thing. Instead, what would 
 happen is

    ALTER EXTENSION foo UPGRADE;

 Would do some trickery behind the scenes to just delete all those objects 
 that are part of the extension (just like DROP EXTENSION should do), and then 
 run the installation SQL script. So the objects would be exactly as specified 
 in the installation script, with no need for the extension maintainer to 
 worry about how to run upgrades, and no need for PostgreSQL to track version 
 numbers.

 Is this do-able? I recognize that there could be some issues with settings 
 tables and what-not, but surely that's no different than for dump and reload. 
 The question in my mind is whether it would even be possible for the 
 extension code to unload every bit of an extension and load the new stuff, 
 inside a transaction.

No, this is not doable, or at least not in a way that provides any
benefit over just dropping and reinstalling.  The problem is that it
is going to fall down all over the place if other objects are
depending on objects provided by the extension.  Like:

CREATE VIEW v AS SELECT extensionfunc(1);

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions versus pg_upgrade

2011-02-08 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 I guess I'm sort of coming to the conclusion that ALTER EXTENSION ..
 UPGRADE is pretty much a must-have for a useful feature regardless of
 this issue.  I had previously thought that we might be able to limp
 along with half a feature for one release - if you're not actually
 depending on anything in the extension, you could always drop it and
 the install the new version.  But the more I think about it, the less
 realistic that sounds; dependencies on the extension are going to be
 very common, and while it may be practical to drop and recreate the
 dependent objects in some cases, it's certainly going to annoy a lot
 of people.

 So I think the choices, realistically, are boot both halves of this to
 9.2, or do it all now.  I don't really have an opinion on which way to
 go with it.  We still have more than 40 patches to deal with for this
 CommitFest, and certainly booting this would free up a bunch of your
 time to go work on other things.  On the other hand, if you are fired
 up about getting this done, maybe it makes sense to get it knocked out
 while it's fresh in your mind instead of letting it linger for another
 six months.  It's clearly a good feature, and I know that Dimitri has
 put a lot of work into getting it this far.

Personally I find the extension stuff to be a bigger deal than anything
else remaining in the commitfest.  Also, I've fixed a number of
pre-existing bugs in passing, and I'd have to go extract those changes
out of the current extensions patch if we abandon it now.  So I'd like
to keep pushing ahead on this.

After further reflection I think that the pg_upgrade situation can be
handled like this:

1. Provide a way for CREATE EXTENSION to not run the script --- either
add a grammar option for that, or just have a back-door flag that
pg_upgrade can set via one of its special kluge functions.  (Hm,
actually we'd want it to install the old version number and comment
too, so maybe just have a kluge function to create a pg_extension
entry that agrees with the old entry.)

2. Invent a command ALTER EXTENSION extname ADD object-description
that simply adds a pg_depend entry marking an existing object as
belonging to the extension.  I think this was going to be part of the
plan anyway for ALTER EXTENSION UPGRADE, specifically we need that for
the bootstrap case of collecting some loose pre-existing objects into
an extension.

3. In --binary-upgrade mode, pg_dump doesn't suppress the individual
member objects, but instead emits ALTER EXTENSION ADD after creating
each member object.

So that gets us to the point of being able to run pg_upgrade without
changing the contents of the extension.  After that we can look at
ALTER EXTENSION UPGRADE.
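
Put together, the --binary-upgrade dump output for an extension might end
up looking roughly like this; the kluge function name is an illustrative
assumption, not settled code:

    -- create the pg_extension row without running the script
    SELECT binary_upgrade.create_empty_extension('cube', 'public', ...);

    -- recreate each member object (with its OID forced as usual), then
    -- attach it to the extension
    CREATE FUNCTION cube_in(cstring) RETURNS cube AS ... ;
    ALTER EXTENSION cube ADD FUNCTION cube_in(cstring);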

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions versus pg_upgrade

2011-02-08 Thread David E. Wheeler
On Feb 8, 2011, at 10:03 AM, Robert Haas wrote:

 No, this is not doable, or at least not in a way that provides any
 benefit over just dropping and reinstalling.  The problem is that it
 is going to fall down all over the place if other objects are
 depending on objects provided by the extension.  Like:
 
 CREATE VIEW v AS SELECT extensionfunc(1);

Ah, right, of course. I don't suppose CREATE OR REPLACE would work, either, in 
some cases at least?

Best,

David
-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions versus pg_upgrade

2011-02-08 Thread Tom Lane
David E. Wheeler da...@kineticode.com writes:
 On Feb 8, 2011, at 10:03 AM, Robert Haas wrote:
 No, this is not doable, or at least not in a way that provides any
 benefit over just dropping and reinstalling.  The problem is that it
 is going to fall down all over the place if other objects are
 depending on objects provided by the extension.  Like:
 
 CREATE VIEW v AS SELECT extensionfunc(1);

 Ah, right, of course. I don't suppose CREATE OR REPLACE would work, either, 
 in some cases at least?

But figuring out just what commands to issue is pretty nearly AI-complete.
The whole point of ALTER EXTENSION UPGRADE is to have a human do that
and then give you a script to run.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions versus pg_upgrade

2011-02-08 Thread David E. Wheeler
On Feb 8, 2011, at 10:24 AM, Tom Lane wrote:

 Ah, right, of course. I don't suppose CREATE OR REPLACE would work, either, 
 in some cases at least?
 
 But figuring out just what commands to issue is pretty nearly AI-complete.
 The whole point of ALTER EXTENSION UPGRADE is to have a human do that
 and then give you a script to run.

Damn. Okay, was just trying to think outside the box a bit here.

Best,

David


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] arrays as pl/perl input arguments [PATCH]

2011-02-08 Thread Alex Hunsaker
On Tue, Feb 8, 2011 at 10:33, Alex Hunsaker bada...@gmail.com wrote:
 On Tue, Feb 8, 2011 at 08:18, Alexey Klyukin al...@commandprompt.com wrote:
 Thanks, looks great to me. It passes all the tests on my OS X system. I 
 wonder
 what's the purpose of the amagic_call in get_perl_array_ref, instead of
 calling newRV_noinc on the target SV * ?

 Well, you can't AV *av = (AV *)SvRV(sv); And the SV * amagic_call
 returns is already a reference, so the newRV_noinc() would be
  redundant, no? It occurs to me that instead of doing the amagic_call we
  could just call the to_array method directly using perl_call_pv().
  That would look more normal and less magic -- that's probably a good
 thing?

Err, even simpler would be to just access the 'array' member directly
out of the hash object.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-08 Thread Dan Ports
On Tue, Feb 08, 2011 at 10:14:44AM -0600, Kevin Grittner wrote:
 I do have some concern that if this defaults to too low a number,
 those who try SSI without bumping it and restarting the postmaster
 will not like the performance under load very much.  SSI performance
 would not be affected by a low setting under light load when there
 isn't a long-running READ WRITE transaction.

If we're worried about this, we could add a log message the first time
SummarizeOldestCommittedXact is called, to suggest increasing the GUC
for number of SerializableXacts. This also has the potential benefit of
alerting the user that there's a long-running transaction, in case that's
unexpected (say, if it were caused by a wedged client).
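
A minimal sketch of that kind of one-time message -- the wording, the
setting it hints at, and the flag are illustrative assumptions, not part of
the patch:

    /* in SummarizeOldestCommittedXact(): hypothetical one-time hint */
    static bool summarize_hint_issued = false;

    if (!summarize_hint_issued)
    {
        ereport(LOG,
                (errmsg("serializable transaction information pushed to lossy summary"),
                 errhint("A long-running transaction may be the cause; consider "
                         "raising the setting that sizes the SerializableXact pool.")));
        summarize_hint_issued = true;
    }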

I don't have any particular opinion on what the default value of the
GUC should be.

Dan

-- 
Dan R. K. Ports  MIT CSAILhttp://drkp.net/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] MVCC doc typo fix

2011-02-08 Thread Kevin Grittner
attached
 
-Kevin

*** a/doc/src/sgml/mvcc.sgml
--- b/doc/src/sgml/mvcc.sgml
***
*** 670,676  ERROR:  could not serialize access due to read/write 
dependencies among transact
   permanent database writes within Serializable transactions on the
   master will ensure that all standbys will eventually reach a consistent
   state, a Repeatable Read transaction run on the standby can sometimes
!  see a transient state which in inconsistent with any serial execution
   of serializable transactions on the master.
  /para
 /warning
--- 670,676 
   permanent database writes within Serializable transactions on the
   master will ensure that all standbys will eventually reach a consistent
   state, a Repeatable Read transaction run on the standby can sometimes
!  see a transient state which is inconsistent with any serial execution
   of serializable transactions on the master.
  /para
 /warning

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postponing some large patches to 9.2

2011-02-08 Thread Peter Eisentraut
On tis, 2011-02-08 at 00:16 -0500, Steve Singer wrote:
 On 11-02-07 10:37 PM, Robert Haas wrote:
 
  - The PL/python extravaganza.  I'm not really clear where we stand
  with this.  There are a lot of patches here.
 
 
 Some of the patches have been committed; a few others are ready (or
 almost ready) for a committer.  The table function one is the only one
 in 'waiting for author'.
 
 4 of the patches haven't yet received any review. Jan Urbanski has been 
 pretty good about posting updated patches as the dependent patches get 
 updated.  It would be good if a few people grabbed these. The individual 
 patches tend to not be that large.

I'm available to commit these if someone else can do the initial review.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Add ENCODING option to COPY

2011-02-08 Thread Peter Eisentraut
On tis, 2011-02-08 at 10:53 -0500, Tom Lane wrote:
 Peter Eisentraut pete...@gmx.net writes:
  On fre, 2011-02-04 at 10:47 -0500, Tom Lane wrote:
  The reason that we use quotes in CREATE DATABASE is that encoding
  names aren't assumed to be valid SQL identifiers.  If this patch isn't
  following the CREATE DATABASE precedent, it's the patch that's wrong,
  not CREATE DATABASE.
 
  Since encoding names are built-in and therefore well known, and the
  names have been aligned with the SQL standard names, which are
  identifiers, I don't think this argument is valid (anymore).
 
 What about UTF-8?

The canonical name of that is UTF8.  But you can quote it if you want to
spell it differently.
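
For example, this is the behavior as I understand it (illustrative, not
tested against the patch):

    SET client_encoding TO UTF8;      -- canonical name, no quoting needed
    SET client_encoding TO "UTF-8";   -- hyphenated spelling works if quoted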


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SQL/MED - file_fdw

2011-02-08 Thread Noah Misch
On Tue, Feb 08, 2011 at 09:25:29PM +0900, Itagaki Takahiro wrote:
 Here is a revised patch, that including jagged csv support.
 A new exported function is named as NextCopyFromRawFields.

Seems a bit incongruous to handle the OID column in that function.  That part
fits with the other per-column code in NextCopyFrom().

 On Mon, Feb 7, 2011 at 21:16, Noah Misch n...@leadboat.com wrote:
  Perhaps I'm missing something.  The new API does not expose a processed
  count at all; that count is used for the command tag of a top-level COPY.
  This part of the patch is just changing how we structure the code to
  maintain that tally, and it has no implications for observers outside
  copy.c.  Right?
 
 True, but the counter is tightly bound to the INSERT side of COPY FROM.
 | copy.c (1978)
 |  * We count only tuples not suppressed by a BEFORE INSERT trigger;
 
 I think it is a cleaner way to separate it from CopyState
 because it is used as a file reader rather than a writer.
 However, if there are objections, I'd revert it.

No significant objection.  I just wished to be clear on whether it was pure
refactoring, or something more.

  ExecEvalExpr(), used to acquire default values, will still use the
  per-tuple context of CopyState-estate. ??That per-tuple context will never 
  get
  reset explicitly, so default value computations leak until EndCopyFrom().
 
 Ah, I see. I didn't see the leak because we rarely use per-tuple memory
 context in the estate. We just use CurrentMemoryContext in many cases.
 But the leak could occur, and the code is misleading.
 I moved ResetPerTupleExprContext() into NextCopyFrom(), but
 CurrentMemoryContext is still used in it for the result values.

The code still violates the contract of ExecEvalExpr() by calling it with
CurrentMemoryContext != econtext->ecxt_per_tuple_memory.

 Another possible design might be passing EState as an argument of
 NextCopyFrom and removing estate from CopyState. It seems a much cleaner way
 in terms of control flow, but it requires more changes in file_fdw.
 Comments?

Seems sensible and more-consistent with the typical structure of executor code.
Do we place any constraints on sharing of EState among different layers like
this?  I could not identify any, offhand.

The new API uses EState for two things.  First, BeginCopyFrom() passes it to
ExecPrepareExpr().  Instead, let's use expression_planner() + ExecInitExpr() and
require that we've been called with a memory context of suitable longevity.
Things will break anyway if BeginCopyFrom()'s CurrentMemoryContext gets reset
too early.  This way, we no longer need an EState in BeginCopyFrom().
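
A minimal sketch of that suggestion, loosely following the default-expression
handling already in copy.c (not the actual patch):

    /* in BeginCopyFrom(): plan and initialize the default expression in
     * the caller's memory context, with no EState involved */
    Expr *defexpr = (Expr *) build_column_default(cstate->rel, attnum);

    if (defexpr != NULL)
    {
        /* instead of ExecPrepareExpr(defexpr, estate): */
        defexpr = expression_planner(defexpr);
        cstate->defexprs[num_defaults++] = ExecInitExpr(defexpr, NULL);
    }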

Second, NextCopyFrom() sends the per-output-tuple ExprContext of the EState to
ExecEvalExpr().  It really needs a specific ExprContext, not an EState.  A
top-level COPY has a bijection between input and output tuples, making the
distinction unimportant.  GetPerTupleExprContext() is wrong for a file_fdw scan,
though.  We need the ExprContext of the ForeignScanState or another of
equivalent lifecycle.  NextCopyFrom() would then require that it be called with
CurrentMemoryContext == econtext->ecxt_per_tuple_memory.

Sorry, I missed a lot of these memory details on my first couple of reviews.

nm

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Scheduled maintenance affecting gitmaster

2011-02-08 Thread Magnus Hagander
Hi!

The PostgreSQL Infrastructure Team will be performing scheduled
maintenance on the server that is hosting gitmaster.postgresql.org the
coming sunday, Feb 13th. While we expect this to only cause a very
short downtime for the service, one can never be entirely sure with
remote maintenance.. For this reason, please avoid using the service
during this time. Note that only the git *master* server is affected;
the git.postgresql.org anonymous mirror is on another host and
will not be affected.

We will post a note here when the maintenance has been completed.

Date: 2011-02-13
Start: 12:00 UTC
End: ~13:00 UTC, but may drag out
Affected services: gitmaster.postgresql.org and misc. internal services

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep for 2011CF1

2011-02-08 Thread Robert Haas
On Mon, Feb 7, 2011 at 1:20 PM, Robert Haas robertmh...@gmail.com wrote:
 On Sat, Jan 15, 2011 at 4:40 PM, Simon Riggs si...@2ndquadrant.com wrote:
 Here's the latest patch for sync rep.

 Here is a rebased version of this patch which applies to head of the
 master branch.  I haven't tested it yet beyond making sure that it
 compiles and passes the regression tests -- but this fixes the bitrot.

As I mentioned yesterday that I would do, I spent some time working on
this.  I think that there are somewhere between three and six
independently useful features in this patch, plus a few random changes
to the documentation that I'm not sure whether we want or not (e.g.
replacing master by primary in a few places, or the other way around).

One problem with the core synchronous replication technology is that
walreceiver cannot both receive WAL and write WAL at the same time.
It switches back and forth between reading WAL from the network socket
and flushing it to disk.  The impact of that is somewhat mitigated in
the current patch because it only implements the fsync level of
replication, and chances are that the network read time is small
compared to the fsync time.  But it would certainly suck for the
receive level we've talked about having in the past, because after
receiving each batch of WAL, the WAL receiver wouldn't be able to send
any more acknowledgments until the fsync completed, and that's bound
to be slow.  I'm not really sure how bad it will be in fsync mode;
it may be tolerable, but as Simon noted in a comment, in the long run
it'd certainly be nicer to have the WAL writer process running during
recovery.
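
Schematically, the structure being described looks something like this (a
pseudocode sketch of the loop, not the actual walreceiver code):

    for (;;)
    {
        /* drain whatever WAL is currently available on the socket */
        while (walrcv_receive(&buf, &len))
            XLogWalRcvWrite(buf, len, recptr);

        /* then flush it; nothing is read from the socket, and no
         * acknowledgment can be sent, until the fsync completes */
        XLogWalRcvFlush();
    }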

As a general comment on the quality of the code, I think that the
overall logic is probably sound, but there are an awful lot of
debugging leftovers and inconsistencies between various parts of the
patch.  For example, when I initially tested it, *asynchronous*
replication kept breaking between the master and the standby, and I
couldn't figure out why.  I finally realized that there was a ten
second pause that had been inserted into the WAL receiver loop as a
debugging tool which was allowing the standby to get far enough behind
that the master was able to recycle WAL segments the standby still
needed.  Under ordinary circumstances, I would say that a patch like
this was not mature enough to submit for review, let alone commit.
For that reason, I am pretty doubtful about the chances of getting
this finished for 9.1 without some substantial prolongation of the
schedule.

That having been said, there is at least one part of this patch which
looks to be in pretty good shape and seems independently useful
regardless of what happens to the rest of it, and that is the code
that sends replies from the standby back to the primary.  This allows
pg_stat_replication to display the write/flush/apply log positions on
the standby next to the sent position on the primary, which as far as
I am concerned is pure gold.  Simon had this set up to happen only
when synchronous replication or XID feedback is in use, but I think
people are going to want it even with plain old asynchronous
replication, because it provides a FAR easier way to monitor standby
lag than anything we have today.  I've extracted this portion of the
patch, cleaned it up a bit, written docs, and attached it here.

I wasn't too sure how to control the timing of the replies.  It's
worth noting that you have to send them pretty frequently for the
distinction between xlog written and xlog flushed to have any value.
What I've done here is made it so that every time we read all
available data on the socket, we send a reply.  After flushing, we
send another reply.  And then just for the heck of it we send a reply
at least every 10 seconds (configurable), which causes the
last-known-apply position to eventually get updated on the master.
This means the apply position can lag reality by a bit.  Simon's
version adds a latch, so that the startup process can poke the WAL
receiver to send a reply when the apply position moves.  But this is
substantially more complex and I'm not sure it's worth it.  If we were
implementing the apply level of synchronized replication, we'd
clearly need that for performance not to stink.  But since the patch
is only implementing fsync anyway, it doesn't seem necessary for
now.

The only real complaint I can imagine about offering this
functionality all the time is that it uses extra bandwidth.  I'm
inclined to think that the ability to shut it off completely is
sufficient answer to that complaint.

dons asbestos underwear

Comments?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


wal-receiver-replies.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-08 Thread Heikki Linnakangas

On 08.02.2011 18:14, Kevin Grittner wrote:

I wrote:


The multiplier of 10 PredXactList structures per connection is
kind of arbitrary.  It affects the point at which information is
pushed to the lossy summary, so any number from 2 up will work
correctly; it's a matter of performance and false positive rate.
We might want to put that on a GUC and default it to something
lower.


If the consensus is that we want to add this knob, I can code it up
today.  If we default it to something low, we can knock off a large
part of the 2MB increase in shared memory used by SSI in the default
configuration.  For those not using SERIALIZABLE transactions the
only impact is that less shared memory will be reserved for
something they're not using.  For those who try SERIALIZABLE
transactions, the smaller the number, the sooner performance will
start to drop off under load -- especially in the face of a
long-running READ WRITE transaction.  Since it determines shared
memory allocation, it would have to be a restart-required GUC.

I do have some concern that if this defaults to too low a number,
those who try SSI without bumping it and restarting the postmaster
will not like the performance under load very much.  SSI performance
would not be affected by a low setting under light load when there
isn't a long-running READ WRITE transaction.


Hmm, comparing InitPredicateLocks() and PredicateLockShmemSize(), it 
looks like RWConflictPool is missing altogether from the calculations in 
PredicateLockShmemSize().


I added an elog to InitPredicateLocks() and PredicateLockShmemSize(), to 
print the actual and estimated size. Here's what I got with 
max_predicate_locks_per_transaction=10 and max_connections=100:


LOG:  shmemsize 635467
LOG:  actual 1194392
WARNING:  out of shared memory
FATAL:  not enough shared memory for data structure shmInvalBuffer 
(67224 bytes requested)


On the other hand, when I bumped max_predicate_locks_per_transaction to 
100, I got:


LOG:  shmemsize 3153112
LOG:  actual 2339864

Which is a pretty big overestimate, percentage-wise. Taking 
RWConflictPool into account in PredicateLockShmemSize() fixes the 
underestimate, but makes the overestimate correspondingly larger. I've 
never compared the actual and estimated shmem sizes of other parts of 
the backend, so I'm not sure how large discrepancies we usually have, 
but that seems quite big.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postponing some large patches to 9.2

2011-02-08 Thread Jeff Davis
On Mon, 2011-02-07 at 22:37 -0500, Robert Haas wrote:
 - Range Types.  This is a large patch which was submitted for the
 first time to the last CommitFest of the cycle, and the first version
 that had no open TODO items was posted yesterday, three-quarters of
 the way through that last CommitFest.  Some good review has been done.
  While more is probably needed, I think we should feel good about
 what's been accomplished and mark this one Returned with Feedback.

I submitted this clearly marked WIP, so I expected that it would likely
be pushed to 9.2.

However, I don't feel like I have the kind of feedback that will help me
get it committed next commitfest. I did get some review, and that was
helpful, but it was mostly on isolated details.

The patch is a million little decisions: names, catalog structure,
interface, representation, general usability, grammar, functionality,
etc. Without some checkpoint, the chances that everyone agrees with all
of these decisions at the beginning of the next commitfest are zero.

Is the commitfest not the right place to do this? If not, then when?

Regards,
Jeff Davis


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postponing some large patches to 9.2

2011-02-08 Thread Robert Haas
On Tue, Feb 8, 2011 at 1:02 PM, David Fetter da...@fetter.org wrote:
 Given how things worked, i.e. that people were not clear that 9.1
 development had actually started, etc., I am again proposing that we
 have one more CF starting March 15 to get this all cleaned up.  Yes, I
 know that wasn't the plan, but I also know that we're really, really
 close on a whole bunch of things, some of which have been in the
 offing for years at this point, and we risk giving people the
 impression, if they don't already have it, that if they're not in the
 inner circle, their patches get lower priority no matter what their
 merits.

I agree that we have some problems in that area - particularly with
writeable CTEs - but prolonging the schedule isn't going to fix that
problem.

I don't think that's entirely fair to people who planned their work
over the last eight months so that their patches would be committed
before the deadline.  I both worked hard to make sure the stuff I
cared most about got committed in time, and passed over projects that
I could not get done in time, even though I *could* have gotten them
done given another two months.  I realize there are all sorts of good
reasons why people didn't get things done on time, like, say, the need
to earn a living - but having a time frame and sticking with it is
ultimately better for the project, at least in my opinion.  If we have
to slip the end of the CommitFest a little to get a few extra things
in, OK, but adding another two months to the development schedule
that's been published for most of a year is a pretty drastic change.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postponing some large patches to 9.2

2011-02-08 Thread Jeff Davis
On Tue, 2011-02-08 at 06:57 -0500, Stephen Frost wrote:
 * Robert Haas (robertmh...@gmail.com) wrote:
  - Range Types.  This is a large patch which was submitted for the
  first time to the last CommitFest of the cycle, and the first version
  that had no open TODO items was posted yesterday, three-quarters of
  the way through that last CommitFest.  Some good review has been done.
   While more is probably needed, I think we should feel good about
  what's been accomplished and mark this one Returned with Feedback.
 
 I don't agree w/ punting Range Types.  Range Types were discussed as far
 back as the 2010 developer meeting, were discussed quite a bit again
 starting in October and throughout the fall, and Jeff has regularly
 been posting updates to it.  Given how thorough Jeff is, my feeling is
 that this patch is more than ready for beta.

I appreciate the sentiment, but in addition to some cleanup, any patch
like this at least requires some discussion. It's a language change
we'll be supporting for a long time.

At minimum, we're a couple hundred emails shy of a real consensus on the
naming ;)

Regards,
Jeff Davis


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postponing some large patches to 9.2

2011-02-08 Thread Stephen Frost
* Jeff Davis (pg...@j-davis.com) wrote:
 I appreciate the sentiment, but in addition to some cleanup, any patch
 like this at least requires some discussion. It's a language change
 we'll be supporting for a long time.

My feeling was that we have had at least some of that discussion this
past fall...  It's not like you went out and developed this completely
outside of any community contact.  Perhaps it's just not as
controversial as you're expecting. :)

 At minimum, we're a couple hundred emails shy of a real consensus on the
 naming ;)

*smirk.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] postponing some large patches to 9.2

2011-02-08 Thread David Fetter
On Tue, Feb 08, 2011 at 02:04:04PM -0500, Robert Haas wrote:
 On Tue, Feb 8, 2011 at 1:02 PM, David Fetter da...@fetter.org wrote:
  Given how things worked, i.e. that people were not clear that 9.1
  development had actually started, etc., I am again proposing that we
  have one more CF starting March 15 to get this all cleaned up.  Yes, I
  know that wasn't the plan, but I also know that we're really, really
  close on a whole bunch of things, some of which have been in the
  offing for years at this point, and we risk giving people the
  impression, if they don't already have it, that if they're not in the
  inner circle, their patches get lower priority no matter what their
  merits.
 
 I agree that we have some problems in that area - particularly with
 writeable CTEs - but prolonging the schedule isn't going to fix that
 problem.

What is?

 I don't think that's entirely fair to people who planned their work
 over the last eight months so that their patches would be committed
 before the deadline.  I both worked hard to make sure the stuff I
 cared most about got committed in time, and passed over projects that
 I could not get done in time, even though I *could* have gotten them
 done given another two months.  I realize there are all sorts of good
 reasons why people didn't get things done on time, like, say, the need
 to earn a living - but having a time frame and sticking with it is
 ultimately better for the project, at least in my opinion.  If we have
 to slip the end of the CommitFest a little to get a few extra things
 in, OK, but adding another two months to the development schedule
 that's been published for most of a year is a pretty drastic change.

This development cycle was a change from other development cycles in
that it began before development had closed in the previous cycle.
I will not take a position at the moment as to the wisdom of making
this change, but it's been clear from feedback from lots of
developers, even ones who'd be expected to be in the know, that this
vital piece of information did not get out in time.

Let's assume for the moment that we're going with overlapping
development cycles into the future.  I'd submit that given the
propagation delay of this change, the schedule for the first iteration
of this never was reasonable, and slipping two months is a small
price to pay for the huge flock of things we're epsilon short of
getting.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postponing some large patches to 9.2

2011-02-08 Thread Jeff Davis
On Tue, 2011-02-08 at 11:56 -0500, Robert Haas wrote:
 It's a 5400 line patch that wasn't completed until the middle of the
 current CommitFest.  Nobody has ever submitted a major feature patch
 of that size that got done in a single CommitFest, to my recollection,
 or even half that size.

My concern is that, aside from code, my patch didn't make much progress
this commitfest. And the code progress was mostly me working through my
own TODO list on things like GiST support -- which didn't represent any
real decisions, it was mostly just a matter of code.

Regards,
Jeff Davis


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-08 Thread Kevin Grittner
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 
 Taking RWConflictPool into account in PredicateLockShmemSize() fixes
the 
 underestimate, but makes the overestimate correspondingly larger.
I've 
 never compared the actual and estimated shmem sizes of other parts of

 the backend, so I'm not sure how large discrepancies we usually have,

 but that seems quite big.
 
Looking into it...
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postponing some large patches to 9.2

2011-02-08 Thread Robert Haas
On Tue, Feb 8, 2011 at 2:02 PM, Jeff Davis pg...@j-davis.com wrote:
 On Mon, 2011-02-07 at 22:37 -0500, Robert Haas wrote:
 - Range Types.  This is a large patch which was submitted for the
 first time to the last CommitFest of the cycle, and the first version
 that had no open TODO items was posted yesterday, three-quarters of
 the way through that last CommitFest.  Some good review has been done.
  While more is probably needed, I think we should feel good about
 what's been accomplished and mark this one Returned with Feedback.

 I submitted this clearly marked WIP, so I expected that it would likely
 be pushed to 9.2.

 However, I don't feel like I have the kind of feedback that will help me
 get it committed next commitfest. I did get some review, and that was
 helpful, but it was mostly on isolated details.

 The patch is a million little decisions: names, catalog structure,
 interface, representation, general usability, grammar, functionality,
 etc. Without some checkpoint, the chances that everyone agrees with all
 of these decisions at the beginning of the next commitfest are zero.

 Is the commitfest not the right place to do this? If not, then when?

That's a fair question, and I do understand the difficulty.  I think a
CommitFest is the right place to do that.  On the other hand, as I'm
sure you realize, I'm not keen to hold up 9.1beta for a feature that
isn't going to be committed until 9.2.  In the 9.0 and 9.1 cycles, it
was my observation that patches which were submitted to the first or
second CommitFest got a lot of this kind of review and went onto be
committed without a problem, usually in the next CommitFest.
Absorbing patches at the end of the cycle is a lot harder, because
everyone who has been thinking about doing something for the release
wakes up and submits it, often half-finished, often at the very last
minute.  Furthermore, it takes more time to review them even if they
ARE code-complete, because it takes more time to do a real
review-to-commit than it does to tell someone to call it a whisker
instead of a potato and delete all the comments about sweet potatoes.

I really don't think that getting this into 9.2 is going to be a
problem.  Given the level of interest in this patch, there will be no
problem at all finding someone to do a detailed review when we're not
quite so pressed for time, and I'm willing to put some time into it
myself when I'm not quite so pressed for time.  Although it doesn't
feel like it at the moment, we have actually made great strides in
absorbing large patches.  We've already absorbed true serializability
and SE-Linux integration (cut down basic version) this release cycle,
and it looks like we're going to absorb extensions and hopefully
SQL/MED also.  Those are all big, big pieces of work *by
non-committers*, and the fact that they've sailed in with hardly a
cross word is, to me, a very good sign for the future of our
development community.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [pgsql-general 2011-1-21:] Are there any projects interested in object functionality? (+ rule bases)

2011-02-08 Thread Peter Eisentraut
On lör, 2011-02-05 at 12:26 +0100, Nick Rudnick wrote:
 o   extensions of PostgreSQL to support such a kind of usage have to
 be expected to be rejected from integration into the core code base
 -- i.e., if they are done, students have to be told «you
 can't expect this to become a part of PostgreSQL»

There are very varied interests in this community.  But you have to keep
in mind that PostgreSQL is a production software package, not a research
platform.  This doesn't mean that we discourage research with the
PostgreSQL code base.  But that should be done in a fork.  Code aimed
for inclusion in a proper PostgreSQL release should have demonstrable
real life use and should be developed according to certain standards and
in close cooperation with the community.  If a research project can deal
with that, great, but typically, research projects are there to
determine whether something could have real-life value in the first
place, and development follows different standards of quality.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep for 2011CF1

2011-02-08 Thread Magnus Hagander
On Tue, Feb 8, 2011 at 19:53, Robert Haas robertmh...@gmail.com wrote:
 On Mon, Feb 7, 2011 at 1:20 PM, Robert Haas robertmh...@gmail.com wrote:
 On Sat, Jan 15, 2011 at 4:40 PM, Simon Riggs si...@2ndquadrant.com wrote:
 Here's the latest patch for sync rep.

 Here is a rebased version of this patch which applies to head of the
 master branch.  I haven't tested it yet beyond making sure that it
 compiles and passes the regression tests -- but this fixes the bitrot.

 As I mentioned yesterday that I would do, I spent some time working on
 this.  I think that there are somewhere between three and six
 independently useful features in this patch, plus a few random changes
 to the documentation that I'm not sure whether we want or not (e.g.
 replacing master by primary in a few places, or the other way around).

 One problem with the core synchronous replication technology is that
 walreceiver cannot both receive WAL and write WAL at the same time.
 It switches back and forth between reading WAL from the network socket
 and flushing it to disk.  The impact of that is somewhat mitigated in
 the current patch because it only implements the fsync level of
 replication, and chances are that the network read time is small
 compared to the fsync time.  But it would certainly suck for the
 receive level we've talked about having in the past, because after
 receiving each batch of WAL, the WAL receiver wouldn't be able to send
 any more acknowledgments until the fsync completed, and that's bound
 to be slow.  I'm not really sure how bad it will be in fsync mode;
 it may be tolerable, but as Simon noted in a comment, in the long run
 it'd certainly be nicer to have the WAL writer process running during
 recovery.

 As a general comment on the quality of the code, I think that the
 overall logic is probably sound, but there are an awful lot of
 debugging leftovers and inconsistencies between various parts of the
 patch.  For example, when I initially tested it, *asynchronous*
 replication kept breaking between the master and the standby, and I
 couldn't figure out why.  I finally realized that there was a ten
 second pause that had been inserted into the WAL receiver loop as a
 debugging tool which was allowing the standby to get far enough behind
 that the master was able to recycle WAL segments the standby still
 needed.  Under ordinary circumstances, I would say that a patch like
 this was not mature enough to submit for review, let alone commit.
 For that reason, I am pretty doubtful about the chances of getting
 this finished for 9.1 without some substantial prolongation of the
 schedule.

 That having been said, there is at least one part of this patch which
 looks to be in pretty good shape and seems independently useful
 regardless of what happens to the rest of it, and that is the code
 that sends replies from the standby back to the primary.  This allows
 pg_stat_replication to display the write/flush/apply log positions on
 the standby next to the sent position on the primary, which as far as
 I am concerned is pure gold.  Simon had this set up to happen only
 when synchronous replication or XID feedback is in use, but I think
 people are going to want it even with plain old asynchronous
 replication, because it provides a FAR easier way to monitor standby
 lag than anything we have today.  I've extracted this portion of the
 patch, cleaned it up a bit, written docs, and attached it here.

+1. I haven't actually looked at the patch, but having this ability
would be *great*.

I also agree with the general idea of trying to break it into smaller
parts - even if each one only provides a small part on its own. That
also makes it easier to get an overview of exactly how much is left,
to see where to focus.


 The only real complaint I can imagine about offering this
 functionality all the time is that it uses extra bandwidth.  I'm
 inclined to think that the ability to shut it off completely is
 sufficient answer to that complaint.

Yes, agreed.

I would usually not worry about the bandwidth, really, I'd be more
worried about potentially increasing latency somewhere.


 dons asbestos underwear

The ones with little rocketships on them?


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] REVIEW Range Types

2011-02-08 Thread Erik Rijkers
On Sun, February 6, 2011 07:41, Jeff Davis wrote:
 New patch. All known TODO items are closed, although I should do a
 cleanup pass over the code and docs.

 Fixed in this patch:

   * Many documentation improvements
   * Added INT8RANGE
   * Renamed PERIOD[TZ] -> TS[TZ]RANGE
   * Renamed INTRANGE -> INT4RANGE
   * Improved parser's handling of whitespace and quotes
   * Support for PL/pgSQL functions with ANYRANGE arguments/returns
   * Make subtype_float function no longer a requirement for GiST,
 but it should still be supplied for the penalty function to be
 useful.


I'm afraid even the review is WIP, but I thought I'd post what I have.

Context: At the moment we use postbio (see below) range functionality, to
search ranges and overlap in large DNA databases ('genomics').  We would be
happy if a core data type could replace that.  It does look like the present
patch is ready to do those same tasks, of which the main one for us is
gist-indexed ranges. We also use btree_gist with that, so to include that in
core would make sense in this regard.
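
To make that concrete, the kind of usage we have in mind looks roughly like
this (operator and literal syntax are as I understand the patch, so treat
this as an illustrative sketch rather than tested syntax):

    CREATE TABLE features (
        id    serial PRIMARY KEY,
        span  int4range   -- e.g. base-pair positions on a chromosome
    );
    CREATE INDEX features_span_idx ON features USING gist (span);

    -- find features overlapping a region of interest
    SELECT id FROM features WHERE span && '[150000,200000)'::int4range;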


test config:

./ configure \
  --prefix=/var/data1/pg_stuff/pg_installations/pgsql.range_types \
  --with-pgport=6563 \
  --enable-depend \
  --enable-cassert \
  --enable-debug \
  --with-perl \
  --with-openssl \
  --with-libxml \
  --enable-dtrace

compile, make, check, install all OK.

--
Submission review:
--

* Is the patch in context diff format?

Yes.

* Does it apply cleanly to the current git master?

It applied cleanly. (after the large serialisation commit 2011.02.08 it will
need some changes/rebase)

* Does it include reasonable tests, necessary doc patches, etc?

Yes, there are many tests; the documentation is good. Small improvements below.

-
Usability review
-

Read what the patch is supposed to do, and consider:

* Does the patch actually implement that?

Yes.

* Do we want that?

Yes.

* Do we already have it?

contrib/seg has some similar functionalities: seg is a data type for
representing line segments, or floating point intervals.

And on pgFoundry there is a seg spin-off, postbio, tailored to genomic data
(with gist-indexing).
(see postbio manual: http://postbio.projects.postgresql.org/ )

* Does it follow SQL spec, or the community-agreed behavior?

I don't know - I couldn't find much in the SQL-spec on a range datatype.

The ranges behaviour has been discussed on -hackers.

* Does it include pg_dump support (if applicable)?

dump/restore were fine in the handful of range-tables
which I moved between machines.

* Are there dangers?

Not that I could find.

* Have all the bases been covered?

I think the functionality looks fairly complete.

-
Feature test:
-

* Does the feature work as advertised?

The patch seems very stable.  My focus has been mainly on the intranges. I
tested by parsing documentation- and regression examples, and parametrising
them in a perl harness, to generate many thousands of range combinations.  I
found only a single small problem (posted earlier - Jeff Davis solved it
already apparently).

see: http://archives.postgresql.org/pgsql-hackers/2011-02/msg00387.php

* Are there corner cases the author has failed to consider?

No.

* Are there any assertion failures or crashes?

I haven't seen a single one.

--
Documentation:
--

Section 9.18:
  table 9-42. range functions:
  The following functions are missing (I encountered them in the regression
  tests):
contained_by()
range_eq()

section 'Constructing Ranges' (8.16.6):
  In the code example, remove the following line:
-- the int4range result will appear in the canonical format
  it doesn't make sense there.  At that point the canonical format has not
  been discussed; maybe it is not even discussed anywhere.

also (same place):
   'where _ is used to mean exclusive and  is used to mean inclusive.'
should be:
   'where _ is used to mean exclusive and i is used to mean inclusive.'

And btw: it should mention here that the range*inf* functions use
an underscore to separate 'range' from the rest of the function name, e.g.:
   range_linfi_()  =  infinite lower bound,  inclusive upper bound


I still want to do Performance review and Coding review.


FWIW, I would like to repeat that my impression is that the patch is very
stable, especially with regard to the intranges (tested extensively).


regards,

Erik Rijkers



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] MVCC doc typo fix

2011-02-08 Thread Heikki Linnakangas

On 08.02.2011 20:40, Kevin Grittner wrote:

attached


Fixed.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postponing some large patches to 9.2

2011-02-08 Thread Robert Haas
On Tue, Feb 8, 2011 at 2:13 PM, David Fetter da...@fetter.org wrote:
 I agree that we have some problems in that area - particularly with
 writeable CTEs - but prolonging the schedule isn't going to fix that
 problem.

 What is?

I think the best solution would probably be to find corporate sponsors
for more PostgreSQL committers, or other senior community members who
might become committers if they had more time to spend on it.  The
problem is that there are basically two people who understand the
executor well enough to commit that patch: Tom, and Heikki.  And Tom
doesn't really like the feature (for reasons that are totally baffling
to me, BTW).  If I had three weeks to spend on that patch, it would be
in by now, and I'd know the executor a lot better too, which would
make things easier for the next guy who wants to do something of that
type.  But I (with some exceptions) don't get paid to commit other
people's patches, so that limits the amount of time I can spend on it
to (a) time when I'm not working on something that's a higher priority
for my employer and (b) evenings and weekends.  I invest as much of
that time as I can in community work (which is why I have nerd
tattooed on my forehead) but there's a limited amount of it, and I tend
to invest it in patches where I'm already somewhat familiar with the
relevant portion of the code, because that lets me get through more
patches, if not always the best patches.

 This development cycle was a change from other development cycles in
 that it began before development had closed in the previous cycle.
 I will not take a position at the moment as to the wisdom of making
 this change, but it's been clear from feedback from lots of
 developers, even ones who'd be expected to be in the know, that this
 vital piece of information did not gotten out in time.

The schedule was posted publicly, so I'm not sure how that
communication gap happened.

 Let's assume for the moment that we're going with overlapping
 development cycles into the future.  I'd submit that given the
 propagation delay of this change, the schedule for the first iteration
 of this never was reasonable, and slipping two months is a small
 price to pay for the huge flock of things we're epsilon short of
 getting.

I think you're being considerably over-optimistic about how close the
remaining patches are to commit, and even more optimistic about the
effects of another two months on the quality of the release.  Here's
my prediction: if we slip the schedule for two months, all the
pressure that now exists to get things wrapped up will disappear, and
we'll be back in approximately the same place two months from now.  A
few more things will have been committed, but at least as many new
patches will materialize, so that the remaining volume of work is the
same as before, or even larger.  We'll still end up punting the same
number of patches, but in the meantime we'll have managed to rush a
few things in that will destabilize the tree and require months of
extra beta time during which no new work will be able to be committed.

The point is this: We're not going to punt major patches because they
are close to commit but yet some arbitrary deadline has expired.
We're going to punt them because they're NOT close to commit and a
long-agreed deadline has expired.  I don't think I'd get much support
if I proposed punting everything that isn't yet committed at
2011-02-15 00:00:00, but it's just a mistake to think that if we only
wait another day or another week, we'll get it all done.  We've tried
that before and it usually doesn't work out.  The limiting factor
isn't primarily the amount of time that it takes to write the code;
it's the amount of motivation to get it done.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep for 2011CF1

2011-02-08 Thread Robert Haas
On Tue, Feb 8, 2011 at 2:34 PM, Magnus Hagander mag...@hagander.net wrote:
 I would usually not worry about the bandwidth, really, I'd be more
 worried about potentially increasing latency somewhere.

The time to read and write the socket doesn't seem like it should be
significant, unless the network buffers fill up or I'm missing
something.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Named restore points

2011-02-08 Thread Simon Riggs
On Tue, 2011-02-08 at 14:07 -0300, Euler Taveira de Oliveira wrote:

 Because named restore point is a noop xlog record; besides, transaction and 
 time involve xlog records that contain data.

Committed. Thanks for the patch and the review.

I changed the patch to require wal_level > minimal, rather than
archive_mode = on.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Sync Rep plan from here

2011-02-08 Thread Simon Riggs

From here, I consider myself free and clear to work on Sync Rep as my
final contribution to this Commit Fest. There's still patches I'm
interested in, but priority and time means I won't be reviewing anything
further.

I'm mostly unreachable for next few days, but expect to be working on
Sync rep as much of next week as it takes.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postponing some large patches to 9.2

2011-02-08 Thread Chris Browne
pg...@j-davis.com (Jeff Davis) writes:
 On Tue, 2011-02-08 at 06:57 -0500, Stephen Frost wrote:
 * Robert Haas (robertmh...@gmail.com) wrote:
  - Range Types.  This is a large patch which was submitted for the
  first time to the last CommitFest of the cycle, and the first version
  that had no open TODO items was posted yesterday, three-quarters of
  the way through that last CommitFest.  Some good review has been done.
   While more is probably needed, I think we should feel good about
  what's been accomplished and mark this one Returned with Feedback.
 
 I don't agree w/ punting Range Types.  Range Types were discussed as far
 back as the 2010 developer meeting, were discussed quite a bit again
 starting in October and throughout the fall, and Jeff has regularly
 been posting updates to it.  Given how thorough Jeff is, my feeling is
 that this patch is more than ready for beta.

 I appreciate the sentiment, but in addition to some cleanup, any patch
 like this at least requires some discussion. It's a language change
 we'll be supporting for a long time.

 At minimum, we're a couple hundred emails shy of a real consensus on the
 naming ;)

It's more than a bit sad...  The RangeType change has the massive merit
of enabling some substantial development changes, where we can get rid
of whole classes of comparison clauses, and hopefully whole classes of
range errors.  That was my favorite would-be feature for 9.1.

It'll take some time to get code changes into systems to use this; the
sooner the feature's in a deployable version of Postgres, the earlier
that kind of thing may start.
-- 
(format nil ~S@~S cbbrowne gmail.com)
http://www3.sympatico.ca/cbbrowne/internet.html
Colorless green ideas sleep furiously.
-- Noam Chomsky

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] updated patch for foreach stmt

2011-02-08 Thread Stephen Frost
Greetings,

* Pavel Stehule (pavel.steh...@gmail.com) wrote:
 I resend a patch with last update of this patch

Alright, so, like I said, I really like this feature and would like to
see it included.  To that end, I've done perhaps a bit more than a
review of the patch.  Pavel, if you could go over the changes I've made
and review them and let me know if you see any problems, I'd appreciate
it.  I've tried to get it ready for a committer as much as I can without
being one. :)

I moved the array iteration over into arrayfuncs.c, cleaned it up quite
a bit, cleaned up the pl/pgsql foreach function, improved the PL/PgSQL
documentation to understand FOREACH as another top-level command, added
comments all over the place, etc.

Passes all regressions too.

commit 19deaf69a4dabfa4a223a6dcd36570866ad0bd3c
Author: Stephen Frost sfr...@snowman.net
Date:   Tue Feb 8 15:15:48 2011 -0500

PL/PgSQL FOREACH cleanup

Define and rename element OID to be more consistent, ensure
that the right name gets returned in error messages, and fix
regression output to match new error message (grammar cleanup).

commit f88fd2ab5419f9a2784677038b3fb01053c69163
Merge: f191af1 8c6e3ad
Author: Stephen Frost sfr...@snowman.net
Date:   Tue Feb 8 14:28:18 2011 -0500

Merge branch 'master' of git://git.postgresql.org/git/postgresql into 
plpgsql_foreach

commit f191af16f9d3e5ae0072e61c1b58713040cc8d64
Author: Stephen Frost sfr...@snowman.net
Date:   Tue Feb 8 14:27:05 2011 -0500

PL/PgSQL FOREACH Minor Whitespace Cleanup

commit 612cf5485f202a49aec70cf32f74d19d0d130b6b
Author: Stephen Frost sfr...@snowman.net
Date:   Tue Feb 8 14:06:06 2011 -0500

Improving FOREACH, code and documentation

This patch moves and reworks much of the array iteration code
that FOREACH had been implemented with to be part of arrayfuncs.c
and exported through utils/array.h.  It also cleans up the error
handling and set up pieces of the FOREACH handling in pl_exec.c
Lastly, the documentation and comments are updated and improved.

commit 89058b79e43311e8f37af16c3fc17b622dc97578
Author: Stephen Frost sfr...@snowman.net
Date:   Sun Feb 6 14:14:04 2011 -0500

Add FOREACH top-level PL/PgSQL command

This patch adds a new top-level PL/PgSQL command called FOREACH which
is intended to be for iterating over multi-value variables.  This also
includes the first FOREACH type, an ARRAY iteration capability.

Patch by Pavel Stehule.

Thanks,

Stephen
*** a/doc/src/sgml/plpgsql.sgml
--- b/doc/src/sgml/plpgsql.sgml
***
*** 300,310  $$ LANGUAGE plpgsql;
  para
   All variables used in a block must be declared in the
   declarations section of the block.
!  (The only exceptions are that the loop variable of a literalFOR/ loop
!  iterating over a range of integer values is automatically declared as an
!  integer variable, and likewise the loop variable of a literalFOR/ loop
!  iterating over a cursor's result is automatically declared as a
!  record variable.)
  /para
  
  para
--- 300,308 
  para
   All variables used in a block must be declared in the
   declarations section of the block.
!  (The only exceptions are the loop variables of literalFOR/ and
!  literalFOREACH/ loops which are automatically declared as the
!  appropriate variable type to match what the loop is over.)
  /para
  
  para
***
*** 1359,1375  GET DIAGNOSTICS integer_var = ROW_COUNT;
  
listitem
 para
! A commandFOR/ statement sets literalFOUND/literal true
! if it iterates one or more times, else false.  This applies to
! all four variants of the commandFOR/ statement (integer
! commandFOR/ loops, record-set commandFOR/ loops,
! dynamic record-set commandFOR/ loops, and cursor
! commandFOR/ loops).
! literalFOUND/literal is set this way when the
! commandFOR/ loop exits; inside the execution of the loop,
! literalFOUND/literal is not modified by the
! commandFOR/ statement, although it might be changed by the
! execution of other statements within the loop body.
 /para
/listitem
listitem
--- 1357,1375 
  
listitem
 para
! A commandFOR/ or commandFOREACH/ statement sets
! literalFOUND/literal to true if it iterates one or more times,
! else to false.  This applies to all four variants of the
! commandFOR/ statement (integer commandFOR/ loops, record-set
! commandFOR/ loops, dynamic record-set commandFOR/ loops, and
! cursor commandFOR/ loops) and all variants of the
! commandFOREACH/ statement (currently only ARRAY
! commandFOREACH/ loops).  literalFOUND/literal is set this

Re: [HACKERS] Sync Rep for 2011CF1

2011-02-08 Thread Joshua D. Drake
On Tue, 2011-02-08 at 13:53 -0500, Robert Haas wrote:

 That having been said, there is at least one part of this patch which
 looks to be in pretty good shape and seems independently useful
 regardless of what happens to the rest of it, and that is the code
 that sends replies from the standby back to the primary.  This allows
 pg_stat_replication to display the write/flush/apply log positions on
 the standby next to the sent position on the primary, which as far as
 I am concerned is pure gold.  Simon had this set up to happen only
 when synchronous replication or XID feedback is in use, but I think
 people are going to want it even with plain old asynchronous
 replication, because it provides a FAR easier way to monitor standby
 lag than anything we have today.  I've extracted this portion of the
 patch, cleaned it up a bit, written docs, and attached it here.

Score! +1

JD

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] updated patch for foreach stmt

2011-02-08 Thread Robert Haas
On Tue, Feb 8, 2011 at 3:26 PM, Stephen Frost sfr...@snowman.net wrote:
 Greetings,

 * Pavel Stehule (pavel.steh...@gmail.com) wrote:
 I resend a patch with last update of this patch

 Alright, so, like I said, I really like this feature and would like to
 see it included.  To that end, I've done perhaps a bit more than a
 review of the patch.  Pavel, if you could go over the changes I've made
 and review them and let me know if you see any problems, I'd appreciate
 it.  I've tried to get it ready for a committer as much as I can without
 being one. :)

Amen to that!

I think the syntax Tom suggested before was FOREACH thingy IN ARRAY
arr rather than just FOREACH thingy IN arr.  That's probably a good
idea, because it gives us an escape hatch against needing to invent
yet another variant of this syntax - the word immediately following IN
can be known with confidence to be intended as a keyword rather than
as part of the expression.
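
With the extra keyword, a trivial usage sketch would look something like this
(based on the syntax being discussed here, not on anything committed, and
with the loop variable declared explicitly just to be safe):

    CREATE FUNCTION sum_elements(arr int[]) RETURNS int AS $$
    DECLARE
        total int := 0;
        x     int;
    BEGIN
        FOREACH x IN ARRAY arr LOOP
            total := total + x;
        END LOOP;
        RETURN total;
    END;
    $$ LANGUAGE plpgsql;

    SELECT sum_elements(ARRAY[1,2,3,4]);  -- 10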

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions support for pg_dump, patch v27

2011-02-08 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 Yeah, I deleted that relocatable test because it's redundant:
 control-schema cannot be set for a relocatable extension,
 cf the test in read_extension_control_file.

I missed that you kept this test in your version of the patch.  Sorry
for the noise.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Named restore points

2011-02-08 Thread Thom Brown
On 8 February 2011 19:53, Simon Riggs si...@2ndquadrant.com wrote:
 On Tue, 2011-02-08 at 14:07 -0300, Euler Taveira de Oliveira wrote:

 Because named restore point is a noop xlog record; besides, transaction and
 time involves xlog records that contain data.

 Committed. Thanks for the patch and the review.

 I changed the patch to require wal_level  minimal, rather than
 archive_mode = on.

This could do with a bit more documentation about usage.  Below the
Backup Control Functions table
(http://developer.postgresql.org/pgdocs/postgres/functions-admin.html#FUNCTIONS-ADMIN-BACKUP-TABLE),
each function has a paragraph detailing what it does.

Also, I notice you can easily write over a label.  The case I'm
thinking of is someone in psql creating a named restore point, then
later on, they go in again, accidentally cursor up and select the
previous statement and create it again.  Would this mean that the
previous label is lost, or would it be the case that any subsequent
duplicate labels would have no effect unless the WAL files containing the
original label were consumed?  In either case, a note in the docs
about this would be useful.

And I don't see these label creations getting logged either.  Could we
output that to the log because at least then users can grep the
directory for labels, and, in most cases, the time they occurred?
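
Even a minimal example under that table would go a long way; something like
this is what I have in mind (the recovery_target_name part is my assumption
of how the label is meant to be consumed):

    -- on the primary, before a risky change:
    SELECT pg_create_restore_point('before_app_upgrade');

    -- later, the label can be used as a PITR target, e.g.
    -- recovery_target_name = 'before_app_upgrade' in recovery.conf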

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions versus pg_upgrade

2011-02-08 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 that they might be out on disk.  Suppose that you have the cube
 extension installed and there are some user tables containing columns of
 type cube[].  Those arrays are going to have type cube's OID embedded in
 them.  If cube has a different OID after pg_upgrade then the arrays are
 broken.

Forgot about the internals of arrays.  Sure, that's a problem here.

 Even letting an extension script run and create data types that weren't
 there at all before is problematic, since those types could easily claim
 OIDs that need to be reserved for pre-existing types that appear later
 in the dump script.

 Similar issues arise for the other cases where pg_upgrade is forcing OID
 assignments; it's not doing that just for fun.

There's provision for forcing the OID of an extension at CREATE EXTENSION
time in the UPGRADE patch, but it does not handle the extension
objects.

I though system OIDs and user OIDs can't clash because of a 16384
counter magic assignment at initdb, so I'm having a hard time getting to
understand exactly the problem case.  Will spend time on it tomorrow, if
that still helps.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-08 Thread Dan Ports
One other nit re. the predicate lock table size GUCs: the out-of-memory
case in RegisterPredicateLockingXid (predicate.c:1592 in my tree) gives
the hint to increase max_predicate_locks_per_transaction. I don't think
that's correct, since that GUC isn't used to size SerializableXidHash.

In fact, that error shouldn't arise at all because if there was room in
PredXact to register the transaction, then there should be room to
register its xid in SerializableXidHash. Except that it's possible for
something else to allocate all of our shared memory and thus prevent
SerializableXidHash from reaching its intended max capacity.

In general, it might be worth considering making a HTAB's max_size a
hard limit, but that's a larger issue. Here, it's probably worth just
removing the hint.

Dan

-- 
Dan R. K. Ports  MIT CSAILhttp://drkp.net/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions versus pg_upgrade

2011-02-08 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 Personally I find the extension stuff to be a bigger deal than anything
 else remaining in the commitfest.  Also, I've fixed a number of
 pre-existing bugs in passing, and I'd have to go extract those changes
 out of the current extensions patch if we abandon it now.  So I'd like
 to keep pushing ahead on this.

Stealing words from Kevin, WOO-HOO :)  I'll try to continue devote as
much time as I did already to assist as much as I can here.

 After further reflection I think that the pg_upgrade situation can be
 handled like this:

 1. Provide a way for CREATE EXTENSION to not run the script --- either
 add a grammar option for that, or just have a back-door flag that
 pg_upgrade can set via one of its special kluge functions.  (Hm,
 actually we'd want it to install the old version number and comment
 too, so maybe just have a kluge function to create a pg_extension
 entry that agrees with the old entry.)

In the upgrade patch there's provision for not running the script:

  CREATE WRAPPER EXTENSION foo;

That's pretty useful indeed.  What it does now is read the control file
to init the comment and other fields, but extversion is forced NULL.
That allows special rules for how UPGRADE will pick a script.

There's even code to do 

  CREATE WRAPPER EXTENSION foo WITH SYSID 12345;

We could add version and comment here for purposes of pg_upgrade.

 2. Invent a command ALTER EXTENSION extname ADD object-description
 that simply adds a pg_depend entry marking an existing object as
 belonging to the extension.  I think this was going to be part of the
 plan anyway for ALTER EXTENSION UPGRADE, specifically we need that for
 the bootstrap case of collecting some loose pre-existing objects into
 an extension.

In the upgrade patch, that's spelled ALTER OBJECT foo SET EXTENSION bar;
and does exactly what you're proposing.

 3. In --binary-upgrade mode, pg_dump doesn't suppress the individual
 member objects, but instead emits ALTER EXTENSION ADD after creating
 each member object.

 So that gets us to the point of being able to run pg_upgrade without
 changing the contents of the extension.  After that we can look at
 ALTER EXTENSION UPGRADE.

Well, or begin there as the code you need is already written.
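
To be concrete, the --binary-upgrade output for an extension could then look
roughly like this (the exact spellings are still in flux between the two
patches, and the object names are purely illustrative):

    -- recreate the extension entry without running its script
    CREATE WRAPPER EXTENSION pair;

    -- recreate each member object exactly as in the old cluster,
    -- then attach it to the extension
    CREATE TYPE pair AS (k text, v text);
    ALTER EXTENSION pair ADD TYPE pair;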

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] updated patch for foreach stmt

2011-02-08 Thread Stephen Frost
* Robert Haas (robertmh...@gmail.com) wrote:
 Amen to that!

Hopefully it helped. :)

 I think the syntax Tom suggested before was FOREACH thingy IN ARRAY
 arr rather than just FOREACH thingy IN arr.  That's probably a good
 idea, because it gives us an escape hatch against needing to invent
 yet another variant of this syntax - the word immediately following IN
 can be known with confidence to be intended as a keyword rather than
 as part of the expression.

Alright, alright, *I* don't care that much, though I do feel it's a bit
excessive.  Updated patch against HEAD attached.

commit a5d32fa41fbbbd9ace465f62be714366990061d4
Author: Stephen Frost sfr...@snowman.net
Date:   Tue Feb 8 15:57:40 2011 -0500

PL/PgSQL FOREACH - Add ARRAY keyword

Add ARRAY as required after IN when using FOREACH, to
future-proof against later kinds of FOREACH commands.

Thanks,

Stephen
*** a/doc/src/sgml/plpgsql.sgml
--- b/doc/src/sgml/plpgsql.sgml
***
*** 300,310  $$ LANGUAGE plpgsql;
  para
   All variables used in a block must be declared in the
   declarations section of the block.
!  (The only exceptions are that the loop variable of a literalFOR/ loop
!  iterating over a range of integer values is automatically declared as an
!  integer variable, and likewise the loop variable of a literalFOR/ loop
!  iterating over a cursor's result is automatically declared as a
!  record variable.)
  /para
  
  para
--- 300,308 
  para
   All variables used in a block must be declared in the
   declarations section of the block.
!  (The only exceptions are the loop variables of literalFOR/ and
!  literalFOREACH/ loops which are automatically declared as the
!  appropriate variable type to match what the loop is over.)
  /para
  
  para
***
*** 1359,1375  GET DIAGNOSTICS integer_var = ROW_COUNT;
  
listitem
 para
! A commandFOR/ statement sets literalFOUND/literal true
! if it iterates one or more times, else false.  This applies to
! all four variants of the commandFOR/ statement (integer
! commandFOR/ loops, record-set commandFOR/ loops,
! dynamic record-set commandFOR/ loops, and cursor
! commandFOR/ loops).
! literalFOUND/literal is set this way when the
! commandFOR/ loop exits; inside the execution of the loop,
! literalFOUND/literal is not modified by the
! commandFOR/ statement, although it might be changed by the
! execution of other statements within the loop body.
 /para
/listitem
listitem
--- 1357,1375 
  
listitem
 para
! A commandFOR/ or commandFOREACH/ statement sets
! literalFOUND/literal to true if it iterates one or more times,
! else to false.  This applies to all four variants of the
! commandFOR/ statement (integer commandFOR/ loops, record-set
! commandFOR/ loops, dynamic record-set commandFOR/ loops, and
! cursor commandFOR/ loops) and all variants of the
! commandFOREACH/ statement (currently only ARRAY
! commandFOREACH/ loops).  literalFOUND/literal is set this
! way when the commandFOR/ or commandFOREACH/ loop exits;
! inside the execution of the loop, literalFOUND/literal is not
! modified by the commandFOR/ or commandFOREACH/ statement,
! although it might be changed by the execution of other statements
! within the loop body.
 /para
/listitem
listitem
***
*** 1910,1918  END CASE;
  
  para
   With the literalLOOP/, literalEXIT/,
!  literalCONTINUE/, literalWHILE/, and literalFOR/
!  statements, you can arrange for your applicationPL/pgSQL/
!  function to repeat a series of commands.
  /para
  
  sect3
--- 1910,1918 
  
  para
   With the literalLOOP/, literalEXIT/,
!  literalCONTINUE/, literalWHILE/, literalFOR/,
!  and literalFOREACH/ statements, you can arrange for your
!  applicationPL/pgSQL/ function to repeat a series of commands.
  /para
  
  sect3
***
*** 2238,2243  END LOOP optional replaceablelabel/replaceable /optional;
--- 2238,2285 
  /para
 /sect2
  
+sect2 id=plpgsql-array-foreach-loop
+ titleLooping Through Arrays/title
+ 
+ para
+ Similar to a literalFOR/ loop is the literalFOREACH/ loop.
+ literalFOREACH/ is used to loop over multi-value variables, such
+ as ARRAYs.  Other multi-value variables may be added to literalFOREACH/
+ later.  Note that literalFOREACH/ can be thought of horizantally
+ looping, whereas literalFOR/ can be thought of a vertical loop.
+ The literalFOREACH/ statement to loop over an 

Re: [HACKERS] updated patch for foreach stmt

2011-02-08 Thread Pavel Stehule
2011/2/8 Stephen Frost sfr...@snowman.net:
 Greetings,

 * Pavel Stehule (pavel.steh...@gmail.com) wrote:
 I resend a patch with last update of this patch

 Alright, so, like I said, I really like this feature and would like to
 see it included.  To that end, I've done perhaps a bit more than a
 review of the patch.  Pavel, if you could go over the changes I've made
 and review them and let me know if you see any problems, I'd appreciate
 it.  I've tried to get it ready for a committer as much as I can without
 being one. :)

 I moved the array iteration over into arrayfuncs.c, cleaned it up quite
 a bit, cleaned up the pl/pgsql foreach function, improved the PL/PgSQL
 documentation to understand FOREACH as another top-level command, added
 comments all over the place, etc.


It's looking good - thank you.

There is only one bad keyword in the doc - SCALE instead of SLICE - and
maybe the usage of slicing needs an example.

It is nice.
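
Something along these lines is what I have in mind for such an example (an
untested sketch against this patch):

    DO $$
    DECLARE
        m  int[] := ARRAY[[1,2,3],[4,5,6]];
        r  int[];
    BEGIN
        FOREACH r SLICE 1 IN ARRAY m LOOP
            RAISE NOTICE 'row = %', r;   -- {1,2,3} then {4,5,6}
        END LOOP;
    END;
    $$;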

Regards

Pavel Stehule

 Passes all regressions too.

 commit 19deaf69a4dabfa4a223a6dcd36570866ad0bd3c
 Author: Stephen Frost sfr...@snowman.net
 Date:   Tue Feb 8 15:15:48 2011 -0500

    PL/PgSQL FOREACH cleanup

    Define and rename element OID to be more consistent, ensure
    that the right name gets returned in error messages, and fix
    regression output to match new error message (grammar cleanup).

 commit f88fd2ab5419f9a2784677038b3fb01053c69163
 Merge: f191af1 8c6e3ad
 Author: Stephen Frost sfr...@snowman.net
 Date:   Tue Feb 8 14:28:18 2011 -0500

    Merge branch 'master' of git://git.postgresql.org/git/postgresql into 
 plpgsql_foreach

 commit f191af16f9d3e5ae0072e61c1b58713040cc8d64
 Author: Stephen Frost sfr...@snowman.net
 Date:   Tue Feb 8 14:27:05 2011 -0500

    PL/PgSQL FOREACH Minor Whitespace Cleanup

 commit 612cf5485f202a49aec70cf32f74d19d0d130b6b
 Author: Stephen Frost sfr...@snowman.net
 Date:   Tue Feb 8 14:06:06 2011 -0500

    Improving FOREACH, code and documentation

    This patch moves and reworks much of the array iteration code
    that FOREACH had been implemented with to be part of arrayfuncs.c
    and exported through utils/array.h.  It also cleans up the error
    handling and set up pieces of the FOREACH handling in pl_exec.c
    Lastly, the documentation and comments are updated and improved.

 commit 89058b79e43311e8f37af16c3fc17b622dc97578
 Author: Stephen Frost sfr...@snowman.net
 Date:   Sun Feb 6 14:14:04 2011 -0500

    Add FOREACH top-level PL/PgSQL command

    This patch adds a new top-level PL/PgSQL command called FOREACH which
    is intended to be for iterating over multi-value variables.  This also
    includes the first FOREACH type, an ARRAY iteration capability.

    Patch by Pavel Stehule.

                Thanks,

                        Stephen




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] updated patch for foreach stmt

2011-02-08 Thread Pavel Stehule
2011/2/8 Stephen Frost sfr...@snowman.net:
 * Robert Haas (robertmh...@gmail.com) wrote:
 Amen to that!

 Hopefully it helped. :)

 I think the syntax Tom suggested before was FOREACH thingy IN ARRAY
 arr rather than just FOREACH thingy IN arr.  That's probably a good
 idea, because it gives us an escape hatch against needing to invent
 yet another variant of this syntax - the word immediately following IN
 can be known with confidence to be intended as a keyword rather than
 as part of the expression.

 Alright, alright, *I* don't care that much, though I do feel it's a bit
 excessive.  Updated patch against HEAD attached.

I am thinking so it is good idea. Even I have no plans to expand
plpgsql in next year :), it really opening a doors for later changes.
And it's more secure - we can check if parameter is really array or
some else.

Regards

Pavel Stehule


 commit a5d32fa41fbbbd9ace465f62be714366990061d4
 Author: Stephen Frost sfr...@snowman.net
 Date:   Tue Feb 8 15:57:40 2011 -0500

    PL/PgSQL FOREACH - Add ARRAY keyword

    Add ARRAY as required after IN when using FOREACH, to
    future-proof against later kinds of FOREACH commands.

        Thanks,

                Stephen




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

