Re: [HACKERS] fallocate / posix_fallocate for new WAL file creation (etc...)

2013-07-04 Thread Jeff Davis
On Mon, 2013-07-01 at 00:52 -0400, Greg Smith wrote:
> On 6/30/13 9:28 PM, Jon Nelson wrote:
> > The performance of the latter (new) test sometimes seems to perform
> > worse and sometimes seems to perform better (usually worse) than
> > either of the other two. In all cases, posix_fallocate performs
> > better, but I don't have a sufficiently old kernel to test with.
> 
> This updated test program looks reliable now.  The numbers are very 
> tight when I'd expect them to be, and there's nowhere with the huge 
> differences I saw in the earlier test program.
> 
> Here's results from a few sets of popular older platforms:

...

> The win for posix_fallocate is there in most cases, but it's pretty hard 
> to see in these older systems.  That could be OK.  As long as the 
> difference is no more than noise, and that is the case, this could be 
> good enough to commit.  If there are significantly better results on the 
> new platforms, the old ones need to just not get worse.

I ran my own little test on my workstation[1] with the attached
programs. One does what we do now, another mimics the glibc emulation
you described earlier, and another uses posix_fallocate(). It does an
allocation phase, an fsync, a single rewrite, and then another fsync.
The program runs this 64 times for 64 different 16MB files.

write1 and write2 are almost exactly even at 25.5s. write3 is about
14.5s, which is a pretty big improvement.

So, my simple conclusion is that glibc emulation should be about the
same as what we're doing now, so there's no reason to avoid it. That
means, if posix_fallocate() is present, we should use it, because it's
either the same (if emulated in glibc) or significantly faster (if
implemented in the kernel).

If that is your conclusion, as well, it looks like this patch is about
ready for commit. What do you think?

Regards,
Jeff Davis

[1] workstation specs (ubuntu 12.10, ext4):
$ uname -a
Linux jdavis 3.5.0-34-generic #55-Ubuntu SMP Thu Jun 6 20:18:19 UTC 2013
x86_64 x86_64 x86_64 GNU/Linux


#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

char buf[8192] = {0};

int main()
{
	int i;
	int filenum;
	char filename[64];

	for (filenum = 0; filenum < 64; filenum++)
	{
		sprintf(filename, "/tmp/afile.%02d", filenum);
		int fd = open(filename, O_CREAT | O_EXCL | O_WRONLY, 0600);

		// allocate all 16MB
		for (i = 0; i < 2048; i++)
			write(fd, buf, 8192);

		fsync(fd);

		lseek(fd, 0, SEEK_SET);

		// now overwrite it
		for (i = 0; i < 2048; i++)
			write(fd, buf, 8192);

		fsync(fd);

		close(fd);
	}

	return 0;
}

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

char buf[8192] = {0};

int main()
{
	int i;
	int filenum;
	char filename[64];

	for (filenum = 0; filenum < 64; filenum++)
	{
		sprintf(filename, "/tmp/afile.%02d", filenum);
		int fd = open(filename, O_CREAT | O_EXCL | O_WRONLY, 0600);

		// allocate all 16MB
		posix_fallocate(fd, 0, 16 * 1024 * 1024);

		fsync(fd);

		lseek(fd, 0, SEEK_SET);

		// now overwrite it
		for (i = 0; i < 2048; i++)
			write(fd, buf, 8192);

		fsync(fd);
		close(fd);
	}

	return 0;
}

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

char buf[8192] = {0};

int main()
{
	int i;
	int filenum;
	char filename[64];

	for (filenum = 0; filenum < 64; filenum++)
	{
		sprintf(filename, "/tmp/afile.%02d", filenum);
		int fd = open(filename, O_CREAT | O_EXCL | O_WRONLY, 0600);

		// allocate all 16MB by writing one zero byte at the end of
		// each 4kB block, as glibc's posix_fallocate emulation does
		for (i = 0; i < 4096; i++)
			pwrite(fd, "\0", 1, 4096 * (i + 1) - 1);

		fsync(fd);

		lseek(fd, 0, SEEK_SET);

		// now overwrite it
		for (i = 0; i < 2048; i++)
			write(fd, buf, 8192);

		fsync(fd);

		close(fd);
	}

	return 0;
}

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal - Support for National Characters functionality

2013-07-04 Thread Arulappan, Arul Shaji


> -----Original Message-----
> From: Claudio Freire [mailto:klaussfre...@gmail.com]
> Sent: Friday, 5 July 2013 3:41 PM
> To: Tatsuo Ishii
> Cc: Arulappan, Arul Shaji; PostgreSQL-Dev
> Subject: Re: [HACKERS] Proposal - Support for National Characters
> functionality
> 
> On Fri, Jul 5, 2013 at 2:02 AM, Tatsuo Ishii wrote:
> >> - Support for NATIONAL_CHARACTER_SET GUC variable that will determine
> >> the encoding that will be used in NCHAR/NVARCHAR columns.
> >
> > You said NCHAR's encoding is UTF-8. Why do you need the GUC if NCHAR's
> > encoding is fixed to UTF-8?
> 
> 
> Not only that, but I don't think it can be a GUC. Maybe a compile-time
> switch, but if it were a GUC, how do you handle an existing database in
> UTF-8 when the setting is switched to UTF-16? Re-encode everything?
> Store the encoding along each value? It's a mess.
> 
> Either fix it at UTF-8, or make it a compile-time thing, I'd say.

Agreed that, to begin with, we should only support UTF-8 encoding for
NCHAR columns. If that is the case, do we still need a compile-time
option to turn NCHAR functionality on/off?

Rgds,
Arul Shaji






Re: [HACKERS] Proposal - Support for National Characters functionality

2013-07-04 Thread Arulappan, Arul Shaji
Ishii san,

Thank you for your positive and early response.

> -Original Message-
> From: Tatsuo Ishii [mailto:is...@postgresql.org]
> Sent: Friday, 5 July 2013 3:02 PM
> To: Arulappan, Arul Shaji
> Cc: pgsql-hackers@postgresql.org
> Subject: Re: [HACKERS] Proposal - Support for National Characters
> functionality
> 
> Arul Shaji,
> 
> NCHAR support has been on our TODO list for some time, and I would like
> to welcome efforts to implement it. However, I have a few questions:
> 
> > This is a proposal to implement functionalities for the handling of
> > National Characters.
> >
> > [Introduction]
> >
> > The aim of this proposal is to eventually have a way to represent
> > 'National Characters' in a uniform way, even in non-UTF8 encoded
> > databases. Many of our customers in the Asian region who are now, as
> > part of their platform modernization, are moving away from mainframes
> > where they have used National Characters representation in COBOL and
> > other databases. Having stronger support for national characters
> > representation will also make it easier for these customers to look at
> > PostgreSQL more favourably when migrating from other well known RDBMSs
> > who all have varying degrees of NCHAR/NVARCHAR support.
> >
> > [Specifications]
> >
> > Broadly speaking, the national characters implementation ideally will
> > include the following
> > - Support for NCHAR/NVARCHAR data types
> > - Representing NCHAR and NVARCHAR columns in UTF-8 encoding in
> > non-UTF8 databases
> 
> I think this is not trivial work because we do not have a framework to
> allow mixed encodings in a database. I'm interested in how you are
> going to solve the problem.
> 

I would be lying if I said I have the design already specced out. I will
be working on this in the coming weeks and hope to design a working
solution in consultation with the community.

> > - Support for UTF16 column encoding and representing NCHAR and
> > NVARCHAR columns in UTF16 encoding in all databases.
> 
> Why do you need UTF-16 as the database encoding? UTF-8 is already
> supported, and any UTF-16 character can be represented in UTF-8 as far
> as I know.
> 

Yes, that's correct. However there are advantages in using UTF-16
encoding for those characters that are always going to take at least
two bytes to represent. 

Having said that, my intention is to use UTF-8 for NCHAR as well.
Supporting UTF-16 will be even more complicated as it is not supported
natively in some Linux platforms. I only included it to give an option.

> > - Support for NATIONAL_CHARACTER_SET GUC variable that will determine
> > the encoding that will be used in NCHAR/NVARCHAR columns.
> 
> You said NCHAR's encoding is UTF-8. Why do you need the GUC if NCHAR's
> encoding is fixed to UTF-8?
> 

If we are going to only support UTF-8 for NCHAR, then we don't need the
GUC variable obviously.

Rgds,
Arul Shaji



> > The above points are at the moment a 'wishlist' only. Our aim is to
> > tackle them one-by-one as we progress. I will send a detailed
> > proposal later with more technical details.
> >
> > The main aim at the moment is to get some feedback on the above to
> > know if this feature is something that would benefit PostgreSQL in
> > general, and if users maintaining DBs in non-English speaking
> > regions will find this beneficial.
> >
> > Rgds,
> > Arul Shaji
> >
> >
> >
> > P.S.: It has been quite some time since I sent a correspondence to
> > this list. Our mail server adds a standard legal disclaimer to all
> > outgoing mails, which I know that this list is not a huge fan of. I
> > used to have an exemption for the mails I send to this list. If the
> > disclaimer appears, apologies in advance. I will rectify that on the
> > next one.
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese: http://www.sraoss.co.jp






Re: [HACKERS] Proposal - Support for National Characters functionality

2013-07-04 Thread Claudio Freire
On Fri, Jul 5, 2013 at 2:02 AM, Tatsuo Ishii  wrote:
>> - Support for NATIONAL_CHARACTER_SET GUC variable that will determine
>> the encoding that will be used in NCHAR/NVARCHAR columns.
>
> You said NCHAR's encoding is UTF-8. Why do you need the GUC if NCHAR's
> encoding is fixed to UTF-8?


Not only that, but I don't think it can be a GUC. Maybe a compile-time
switch, but if it were a GUC, how do you handle an existing database
in UTF-8 when the setting is switched to UTF-16? Re-encode everything?
Store the encoding along each value? It's a mess.

Either fix it at UTF-8, or make it a compile-time thing, I'd say.




Re: [HACKERS] Proposal - Support for National Characters functionality

2013-07-04 Thread Tatsuo Ishii
Arul Shaji,

NCHAR support has been on our TODO list for some time, and I would like
to welcome efforts to implement it. However, I have a few questions:

> This is a proposal to implement functionalities for the handling of
> National Characters. 
> 
> [Introduction]
> 
> The aim of this proposal is to eventually have a way to represent
> 'National Characters' in a uniform way, even in non-UTF8 encoded
> databases. Many of our customers in the Asian region who are now, as
> part of their platform modernization, are moving away from mainframes
> where they have used National Characters representation in COBOL and
> other databases. Having stronger support for national characters
> representation will also make it easier for these customers to look at
> PostgreSQL more favourably when migrating from other well known RDBMSs
> who all have varying degrees of NCHAR/NVARCHAR support.
> 
> [Specifications]
> 
> Broadly speaking, the national characters implementation ideally will
> include the following 
> - Support for NCHAR/NVARCHAR data types
> - Representing NCHAR and NVARCHAR columns in UTF-8 encoding in non-UTF8
> databases

I think this is not trivial work because we do not have a framework to
allow mixed encodings in a database. I'm interested in how you are
going to solve the problem.

> - Support for UTF16 column encoding and representing NCHAR and NVARCHAR
> columns in UTF16 encoding in all databases.

Why do you need UTF-16 as the database encoding? UTF-8 is already
supported, and any UTF-16 character can be represented in UTF-8 as far
as I know.

> - Support for NATIONAL_CHARACTER_SET GUC variable that will determine
> the encoding that will be used in NCHAR/NVARCHAR columns.

You said NCHAR's encoding is UTF-8. Why do you need the GUC if NCHAR's
encoding is fixed to UTF-8?

> The above points are at the moment a 'wishlist' only. Our aim is to
> tackle them one-by-one as we progress. I will send a detailed proposal
> later with more technical details.
> 
> The main aim at the moment is to get some feedback on the above to know
> if this feature is something that would benefit PostgreSQL in general,
> and if users maintaining DBs in non-English speaking regions will find
> this beneficial.
> 
> Rgds,
> Arul Shaji
> 
> 
> 
> P.S.: It has been quite some time since I sent a correspondence to this
> list. Our mail server adds a standard legal disclaimer to all outgoing
> mails, which I know that this list is not a huge fan of. I used to have
> an exemption for the mails I send to this list. If the disclaimer
> appears, apologies in advance. I will rectify that on the next one.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp




[HACKERS] [COMMITTERS] pgsql: Add new GUC, max_worker_processes, limiting number of bgworkers.

2013-07-04 Thread Kevin Hale Boyes
The change to config.sgml contains a small typo with the double r in the
xreflabel.

+  


Re: [HACKERS] Grouping Sets

2013-07-04 Thread Dev Kumkar
On Thu, Jul 4, 2013 at 7:53 PM, Atri Sharma  wrote:

> On Thu, Jul 4, 2013 at 6:56 PM, Dev Kumkar wrote:
> > Ok, no problem. Will wait for any other pointers regarding any related
> > patch here.
> >
> > Currently using UNION to achieve similar results but looking to see if
> > anything is already done here.
> >
> > Looks like GROUPING SETS was in the TODO list long back.
> > http://grokbase.com/t/postgresql/pgsql-general/06aaa4g7cq/cube-rollup-grouping-sets
> >
> > Am I missing anything here?
> >
> > Regards...
>
> RhodiumToad and I discussed the idea recently, after David Fetter
> suggested that we work on it. We may start work on it soon, haven't
> thought it through in detail yet though.


Ok, 9.3 feature-wise it looks all done.

So I believe this will be in a 9.3+ release?

Till then I will continue with the UNION approach, as it looks like it
gives the necessary functionality. Any loopholes here, friends?

Regards...


[HACKERS] Proposal - Support for National Characters functionality

2013-07-04 Thread Arulappan, Arul Shaji
This is a proposal to implement functionalities for the handling of
National Characters. 

[Introduction]

The aim of this proposal is to eventually have a way to represent
'National Characters' in a uniform way, even in non-UTF8 encoded
databases. Many of our customers in the Asian region who are now, as
part of their platform modernization, are moving away from mainframes
where they have used National Characters representation in COBOL and
other databases. Having stronger support for national characters
representation will also make it easier for these customers to look at
PostgreSQL more favourably when migrating from other well known RDBMSs
who all have varying degrees of NCHAR/NVARCHAR support.

[Specifications]

Broadly speaking, the national characters implementation ideally will
include the following 
- Support for NCHAR/NVARCHAR data types
- Representing NCHAR and NVARCHAR columns in UTF-8 encoding in non-UTF8
databases
- Support for UTF16 column encoding and representing NCHAR and NVARCHAR
columns in UTF16 encoding in all databases.
- Support for NATIONAL_CHARACTER_SET GUC variable that will determine
the encoding that will be used in NCHAR/NVARCHAR columns.

The above points are at the moment a 'wishlist' only. Our aim is to
tackle them one-by-one as we progress. I will send a detailed proposal
later with more technical details.

The main aim at the moment is to get some feedback on the above to know
if this feature is something that would benefit PostgreSQL in general,
and if users maintaining DBs in non-English speaking regions will find
this beneficial.

Rgds,
Arul Shaji



P.S.: It has been quite some time since I sent a correspondence to this
list. Our mail server adds a standard legal disclaimer to all outgoing
mails, which I know that this list is not a huge fan of. I used to have
an exemption for the mails I send to this list. If the disclaimer
appears, apologies in advance. I will rectify that on the next one.













Re: [HACKERS] GSOC13 proposal - extend RETURNING syntax

2013-07-04 Thread Karol Trzcionka
Hello,
according to my mentor's suggestion, I am sending a first PoC patch for
the "RETURNING AFTER/BEFORE" statement. Some notes:
- it is an early version - more of a hacky PoC than an RC patch
- AFTER in this version works as described before, but it returns the
row after the update and before post-update processing (to be fixed, or
swapped with the current "*" behaviour - I don't know yet)
- the patch passes all regression tests
- the code is still uncommented; to be fixed
I'm not sure what may fail, so use it at your own risk ;)
Regards,
Karol
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index b2183f4..2698535 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -2351,6 +2351,7 @@ _outRangeTblEntry(StringInfo str, const RangeTblEntry *node)
 	switch (node->rtekind)
 	{
 		case RTE_RELATION:
+		case RTE_BEFORE:
 			WRITE_OID_FIELD(relid);
 			WRITE_CHAR_FIELD(relkind);
 			break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 3a16e9d..04931ee 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -1189,6 +1189,7 @@ _readRangeTblEntry(void)
 	switch (local_node->rtekind)
 	{
 		case RTE_RELATION:
+		case RTE_BEFORE:
 			READ_OID_FIELD(relid);
 			READ_CHAR_FIELD(relkind);
 			break;
diff --git a/src/backend/optimizer/plan/initsplan.c b/src/backend/optimizer/plan/initsplan.c
index 839ed9d..7fc6ea1 100644
--- a/src/backend/optimizer/plan/initsplan.c
+++ b/src/backend/optimizer/plan/initsplan.c
@@ -174,9 +174,16 @@ add_vars_to_targetlist(PlannerInfo *root, List *vars,
 		if (IsA(node, Var))
 		{
 			Var		   *var = (Var *) node;
-			RelOptInfo *rel = find_base_rel(root, var->varno);
+			RelOptInfo *rel;
 			int			attno = var->varattno;
+			RangeTblEntry *rte;
 
+			if (root->parse->commandType == CMD_UPDATE){
+rte = ((RangeTblEntry *) list_nth(root->parse->rtable, (var->varno)-1));
+if(rte->rtekind == RTE_BEFORE)
+	continue;
+			}
+			rel = find_base_rel(root, var->varno);
 			Assert(attno >= rel->min_attr && attno <= rel->max_attr);
 			attno -= rel->min_attr;
 			if (rel->attr_needed[attno] == NULL)
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index d80c264..77b0c38 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -1989,6 +1989,9 @@ preprocess_rowmarks(PlannerInfo *root)
 		if (rte->relkind == RELKIND_FOREIGN_TABLE)
 			continue;
 
+		if (rte->relkind == RELKIND_BEFORE)
+			continue;
+
 		rels = bms_del_member(rels, rc->rti);
 
 		newrc = makeNode(PlanRowMark);
@@ -2028,6 +2031,9 @@ preprocess_rowmarks(PlannerInfo *root)
 		if (!bms_is_member(i, rels))
 			continue;
 
+		if (rte->relkind == RELKIND_BEFORE)
+			continue;
+
 		newrc = makeNode(PlanRowMark);
 		newrc->rti = newrc->prti = i;
 		newrc->rowmarkId = ++(root->glob->lastRowMarkId);
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index b78d727..6828a7d 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -1691,6 +1691,12 @@ fix_join_expr_mutator(Node *node, fix_join_expr_context *context)
 	if (IsA(node, Var))
 	{
 		Var		   *var = (Var *) node;
+		if (var->varno<=list_length(context->root->parse->rtable) && var->varno>1 && context->root->parse->commandType == CMD_UPDATE){
+			RangeTblEntry *rte_a,*rte_b;
+			rte_a = (RangeTblEntry *)list_nth(context->root->parse->rtable,var->varno-1);
+			rte_b = (RangeTblEntry *)list_nth(context->root->parse->rtable,var->varno-2);
+			if (rte_a->rtekind == RTE_BEFORE && rte_b->rtekind == RTE_BEFORE) var->varno-=1;
+		}
 
 		/* First look for the var in the input tlists */
 		newvar = search_indexed_tlist_for_var(var,
@@ -1892,6 +1898,37 @@ fix_upper_expr_mutator(Node *node, fix_upper_expr_context *context)
  *
  * Note: resultRelation is not yet adjusted by rtoffset.
  */
+
+void fix_varno_varattno(List *rlist, int begin, int bef, int aft){
+	ListCell   *temp;
+	ListCell   *temp2;
+	Var *var;
+	foreach(temp, rlist){
+		TargetEntry *tle = (TargetEntry *)lfirst(temp);
+
+		if(IsA(tle, TargetEntry)){
+			var = (Var*)tle->expr;
+		}else if(IsA(tle, Var)) var=(Var*)tle;
+		if( IsA(var, Var) ){
+			if(var->varnoold == bef){
+var->varno = OUTER_VAR;
+var->varattno = var->varoattno + begin;
+			}
+			else if(var->varnoold == aft)
+			{
+ var->varno = OUTER_VAR;
+ var->varattno = var->varoattno;
+			}
+		}
+		else if( IsA(var, OpExpr )){
+			fix_varno_varattno(((OpExpr*)var)->args, begin, bef, aft);
+		}
+		else if( IsA(var, FuncExpr )){
+			fix_varno_varattno(((FuncExpr*)var)->args, begin, bef, aft);
+		}
+	}
+}
+
 static List *
 set_returning_clause_references(PlannerInfo *root,
 List *rlist,
@@ -1900,7 +1937,48 @@ set_returning_clause_references(PlannerInfo *root,
 int rtoffset)
 {
 	indexed_tlist *itlist;
+	int before_index=0, after_index=0, var_index=0;
+	Query  *parse = root->parse;
+
+	ListCell   *rt;
+	RangeTblEntry *b

Re: [HACKERS] strange IS NULL behaviour

2013-07-04 Thread Alvaro Herrera
Tom Lane wrote:

> My recollection of the previous discussion is that we didn't have
> consensus on what the "right" behavior is, so I'm not sure you can just
> assert that this patch is right.  In any case this is only touching the
> tip of the iceberg.  If we intend that rows of nulls should be null,
> then we have got issues with, for example, NOT NULL column constraint
> checks, which don't have any such recursion built into them.

FWIW if changing the behavior of NOT NULL constraints is desired, I
still have the patch to catalogue them lying around, if anyone wants to
play with it.  I haven't gotten around to finishing it up yet :-(

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] strange IS NULL behaviour

2013-07-04 Thread Tom Lane
Bruce Momjian  writes:
> I developed the attached patch which properly recurses into ROW()
> records checking for NULLs;  you can see it returns the right answer in
> all cases (and constant folds too):

My recollection of the previous discussion is that we didn't have
consensus on what the "right" behavior is, so I'm not sure you can just
assert that this patch is right.  In any case this is only touching the
tip of the iceberg.  If we intend that rows of nulls should be null,
then we have got issues with, for example, NOT NULL column constraint
checks, which don't have any such recursion built into them.  I think
the same is true for plpgsql variable NOT NULL restrictions, and there
are probably some other places.

> The optimizer seems like the right place to fix this, per my patch.

No, it isn't, or at least it's far from the only place.  If we're going
to change this, we would also want to change the behavior of tests on
RECORD values, which is something that would have to happen at runtime.

regards, tom lane




Re: [HACKERS] [PATCH] big test separation POC

2013-07-04 Thread Fabien COELHO



> +serial_%: parallel_%
> +   echo "# this file is generated automatically, do not edit!" > $@
> +   egrep '^(test|ignore):' $< | \
> +   while read op list ; do \
> +     for test in $$list ; do \
> +       echo "$$op $$test" ; \
> +     done ; \
> +   done >> $@
> +
>
> This won't work on windows all that easily.


Hmmm. I made the assumption that a system with "gnu make" would also have 
a shell and basic unix commands available, but it is possibly quite naive.


ISTM that there are perl scripts used elsewhere for building postgresql. 
Would assuming that perl is available be admissible?



> Maybe we should instead add a "--run-serially" parameter to pg_regress?


I guess that it is quite possible and easy to do. I'm not sure whether it 
is desirable.



> -installcheck: all tablespace-setup
> -   $(pg_regress_installcheck) $(REGRESS_OPTS) --schedule=$(srcdir)/serial_schedule $(EXTRA_TESTS)
> +# after installation, serial
> +installcheck: all tablespace-setup serial_schedule
> +   $(pg_regress_installcheck) $(REGRESS_OPTS) \
> +     --schedule=serial_schedule $(EXTRA_TESTS)
>
> Why do we use the serial schedule for installcheck anyway? Just because
> of max_connections?


I asked myself the same question, and found no obvious answer. Maybe if 
a check is run against an installed version, it is assumed that postgresql 
is being used so the database should not be put under heavy load?


--
Fabien.




Re: [HACKERS] Support for REINDEX CONCURRENTLY

2013-07-04 Thread Fujii Masao
On Thu, Jul 4, 2013 at 3:38 PM, Michael Paquier
 wrote:
> Hi,
>
> I noticed some errors in the comments of the patch committed. Please
> find attached a patch to correct that.

Committed. Thanks!

Regards,

-- 
Fujii Masao




Re: [HACKERS] Review: Patch to compute Max LSN of Data Pages

2013-07-04 Thread Robert Haas
This looks better.

+   fprintf(stderr, _("%s: .. file \"%s\" for seeking: %s\n"),
+   progname, filename, strerror(errno));

Weird error message style - what's with the ".."?

+   fprintf(stderr, _("%s: .. file \"%s\" length is more than segment size: %d.\n"),
+   progname, filename, RELSEG_SIZE);

Again.

+   fprintf(stderr, _("%s: could not seek to next page \"%s\": %s\n"),
+   progname, filename, strerror(errno));

I think this should be written as: could not seek to offset NUMBER in
file "PATH"

+   fprintf(stderr, _("%s: could not read file \"%s\": %s\n"),
+   progname, filename, strerror(errno));

And this one as: could not read file "PATH" at offset NUMBER

+   printf("File:%s Maximum LSN found is: %X/%X \nWAL segment file name (fileid,seg): %X/%X\n",

I think that we don't need to display the WAL segment file name for
the per-file progress updates.  Instead, let's show the block number
where that LSN was found, like this:

Highest LSN for file "%s" is %X/%X in block %u.

The overall output can stay as you have it.

+   if (0 != result)

Coding style.

+ static int
+ FindMaxLSNinFile(char *filename, XLogRecPtr *maxlsn)

It seems that this function, and a few others, use -1 for a failure
return, 0 for success, and all others undefined.  Although this is a
defensible choice, I think it would be more PG-like to make the return
value bool and use true for success and false for failure.

+   if (S_ISREG(statbuf.st_mode))
+   (void) FindMaxLSNinFile(pathbuf, maxlsn);
+   else
+   (void) FindMaxLSNinDir(pathbuf, maxlsn);

I don't see why we're throwing away the return value here.  I would
expect the function to have a "bool result = true" at the top and sent
result = false if one of these functions returns false.  At the end,
it returns result.

+This utility can only be run by the user who installed the server, because
+it requires read/write access to the data directory.

False.

+   LsnSearchPath = argv[optind];
+
+   if (LsnSearchPath == NULL)
+   LsnSearchPath = getenv("PGDATA");

I think you should write the code so that the tool loops over its
input arguments; if none, it processes $PGDATA.  (Don't forget to
update the syntax synopsis and documentation to match.)

I think the documentation should say something about the intended uses
of this tool, including cautions against using it for things to which
it is not well-suited.  I guess the threshold question for this patch
is whether those uses are enough to justify including the tool in the
core distribution.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] strange IS NULL behaviour

2013-07-04 Thread Bruce Momjian
On Mon, Nov 26, 2012 at 09:29:19PM +0100, Hannu Krosing wrote:
> On 11/26/2012 09:05 PM, Tom Lane wrote:
> >Hannu Krosing  writes:
> >>In some previous mail Tom Lane claimed that by SQL standard
> >>either an array of all NULLs or a record with all fields NULLs (I
> >>don't remember which) is also considered NULL. If this is true,
> >>then an empty array - which can be said to consist of nothing
> >>but NULLs - should itself be NULL.
> >What I think you're referring to is that the spec says that "foo IS
> >NULL" should return true if foo is a record containing only null fields.
> Is this requirement recursive ?
> 
> That is , should
> 
> ROW(NULL, NULL, ROW(NULL, ROW(NULL, NULL))) IS NULL
> also be true ?
> 
> Currently PostgreSQL does this kind of IS NULL for "simple" rows
> 
> hannu=# SELECT ROW(NULL, NULL) IS NULL;
>  ?column?
> --
>  t
> (1 row)
> 
> and also for first level row types
> 
> hannu=# SELECT ROW(NULL, ROW(NULL, NULL)) IS NULL;
>  ?column?
> --
>  t
> (1 row)
> 
> but then mysteriously stops working at third level
> 
> hannu=# SELECT ROW(NULL, NULL, ROW(NULL, ROW(NULL, NULL))) IS NULL;
>  ?column?
> --
>  f
> (1 row)

I finally had time to look at this, and it is surprising.  I used
EXPLAIN VERBOSE to see what the optimizer was outputting:

EXPLAIN VERBOSE SELECT ROW(null) IS NULL;
QUERY PLAN
--
 Result  (cost=0.00..0.01 rows=1 width=0)
-->Output: true

EXPLAIN VERBOSE SELECT ROW(ROW(null)) IS NULL;
QUERY PLAN
--
 Result  (cost=0.00..0.01 rows=1 width=0)
-->Output: (ROW(NULL::unknown) IS NULL)

EXPLAIN VERBOSE SELECT ROW(ROW(ROW(null))) IS NULL;
 QUERY PLAN
-
 Result  (cost=0.00..0.01 rows=1 width=0)
-->Output: (ROW(ROW(NULL::unknown)) IS NULL)

The first test outputs a constant, 'true'.  The second test is
ROW(NULL::unknown) because the inner ROW(NULL) was converted to
NULL::unknown.  The third one, which returns false (wrongly), happens
because you have ROW embedded in ROW embedded in ROW, which neither the
optimizer nor the executor can process.

I developed the attached patch which properly recurses into ROW()
records checking for NULLs;  you can see it returns the right answer in
all cases (and constant folds too):

test=> EXPLAIN VERBOSE SELECT ROW(null) IS NULL;
QUERY PLAN
--
 Result  (cost=0.00..0.01 rows=1 width=0)
-->Output: true
(2 rows)

test=> EXPLAIN VERBOSE SELECT ROW(ROW(null)) IS NULL;
QUERY PLAN
--
 Result  (cost=0.00..0.01 rows=1 width=0)
-->Output: true
(2 rows)

test=> EXPLAIN VERBOSE SELECT ROW(ROW(ROW(null))) IS NULL;
QUERY PLAN
--
 Result  (cost=0.00..0.01 rows=1 width=0)
-->Output: true
(2 rows)

You might think the problem is only with constants, but it extends to
column values too (non-patched server):

CREATE TABLE test (x INT);
CREATE TABLE

INSERT INTO test VALUES (1), (NULL);
INSERT 0 2

SELECT ROW(x) IS NULL FROM test;
 ?column?
----------
 f
 t

SELECT ROW(ROW(x)) IS NULL FROM test;
 ?column?
----------
 f
 t

SELECT ROW(ROW(ROW(x))) IS NULL FROM test;
 ?column?
----------
-->  f
-->  f

With the patch, that works too:

SELECT ROW(ROW(ROW(x))) IS NULL FROM test;
 ?column?
----------
 f
 t

The optimizer seems like the right place to fix this, per my patch.  It
already flattens IS NULL tests into a series of AND clauses, and now by
recursing, it handles nested ROW values properly too.

This fix would be for head only.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
new file mode 100644
index 6d5b204..0abf789
*** a/src/backend/optimizer/util/clauses.c
--- b/src/backend/optimizer/util/clauses.c
*** eval_const_expressions_mutator(Node *nod
*** 3149,3155 
  		newntest->arg = (Expr *) relem;
  		newntest->nulltesttype = ntest->nulltesttype;
  		newntest->argisrow = type_is_rowtype(exprType(relem));
! 		newargs = lappend(newargs, newntest);
  	}
  	/* If all the inputs were constants, result is TRUE */
  	if (newargs == NIL)
--- 3149,3160 
  		newntest->arg = (Expr *) relem;
  		

Re: [HACKERS] LATERAL quals revisited

2013-07-04 Thread Antonin Houska

On 07/03/2013 08:32 PM, Tom Lane wrote:

Another possibility would be to keep the optimization, but disable it in
queries that use LATERAL.  I don't much care for that though --- seems
too Rube Goldbergish, and in any case I have a lot less faith in the
whole concept now than I had before I started digging into this issue.

Thoughts?

I noticed EXPLAIN in some regression tests. If they all still pass after
removal of this optimization, that might indicate it was really
insignificant; alternatively, it may just reflect a lack of focus on this
feature in the test queries. Digging for (non-LATERAL) queries, or rather
patterns, where the ph_may_need optimization clearly appears to be
important sounds to me like a good SQL exercise, but I'm afraid I won't
have time for it in the next few days.



//Antonin Houska (Tony)


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Review: Patch to compute Max LSN of Data Pages

2013-07-04 Thread Robert Haas
On Thu, Jul 4, 2013 at 2:14 AM, Amit Kapila  wrote:
> It can cause error "too many levels of symbolic links"

Sure, so you report the error and exit.  No problem.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] request a new feature in fuzzystrmatch

2013-07-04 Thread Robert Haas
On Thu, Jul 4, 2013 at 3:59 AM, Michael Paquier
 wrote:
> 5) Your code copies a function from TOMOYO Linux, which is under GPL2
> license, so I believe that this cannot be integrated to Postgres which
> is under PostgreSQL license (more permissive). Just based on that some
> portions of this code cannot be included in Postgres, I am not sure
> but this gives a sufficient reason to reject this patch.

We definitely cannot accept any GPL code into PostgreSQL.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-07-04 Thread Noah Misch
On Thu, Jul 04, 2013 at 08:08:57PM +1200, Mark Kirkwood wrote:
> On 04/07/13 10:43, Robert Haas wrote:
>
>> And
>> people who submit patches for review should also review patches: they
>> are asking other people to do work, so they should also contribute
>> work.
>>
>
> I think that is an overly simplistic view of things. People submit  
> patches for a variety of reasons, but typically because they think the  
> patch will make the product better (bugfix or new functionality). This  
> is a contribution in itself, not a debt.

True.  I don't see that policy as a judgment against the value of submissions,
but rather a response ...

> Now reviewing is performed to ensure that submitted code is *actually*  
> going to improve the product.
>
> Both these activities are volunteer work - to attempt to tie them
> together forcibly is unusual to say the least. The skills and
> experience necessary to review patches are considerably higher than
> those required to produce patches, hence the topic of this thread.
>
> Now I do understand we have a shortage of reviewers and lots of patches,  

... to this.  Reviewing may be harder than writing a patch, but patch
submitters are more promising as reviewers than any other demographic.  The
situation has a lot in common with the system of academic peer review:
http://en.wikipedia.org/wiki/Peer_review#Scholarly_peer_review

It's a good value for submitters.  By the time my contributions are part of a
release, they've regularly become better than I would have achieved working in
isolation.  Reviewers did that.

> and that this *is* a problem - but what a wonderful problem...many open  
> source projects would love to be in this situation!!!

A database is different from much other software in that users build
intricate, long-lived software of their own against it.  In that respect, it's
like the hardware-independent part of a compiler or an OS kernel.  When we
establish an interface, we maintain it forever or remove it at great user
cost.  It's also different by virtue of managing long-term state, like a
filesystem.  That dramatically elevates the potential cost of a bug.  A
spreadsheet program might reasonably have a different perspective on a surge
of submissions.  For PostgreSQL, figuring out how to review them is key.

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: [HACKERS] Improvement of checkpoint IO scheduler for stable transaction responses

2013-07-04 Thread Joshua D. Drake


On 07/04/2013 06:05 AM, Andres Freund wrote:


Presumably the smaller segsize is better because we don't
completely stall the system by submitting up to 1GB of io at once. So,
if we were to do it in 32MB chunks and then do a final fsync()
afterwards we might get most of the benefits.

Yes, I will try to test this setting ('./configure --with-segsize=0.03125')
tonight. I will send you the test result tomorrow.




I did testing on this a few years ago: I tried 2MB segments instead of
16MB, thinking along similar lines to you. It failed miserably;
performance completely tanked.


JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats




Re: [HACKERS] Improvement of checkpoint IO scheduler for stable transaction responses

2013-07-04 Thread Tom Lane
Andres Freund  writes:
> I don't like going in this direction at all:
> 1) it breaks pg_upgrade. Which means many of the bigger users won't be
>able to migrate to this and most packagers would carry the old
>segsize around forever.
>Even if we could get pg_upgrade to split files accordingly link mode
>would still be broken.

TBH, I think *any* rearrangement of the on-disk storage files is going
to be rejected.  It seems very unlikely to me that you could demonstrate
a checkpoint performance improvement from such a change that occurs
consistently across different platforms and filesystems.  And as Andres
points out, the pain associated with it is going to be bad enough that a
very high bar will be set on whether you've proven the change is
worthwhile.

regards, tom lane




Re: [HACKERS] Mention in bgworker docs that db connection needs shmem access

2013-07-04 Thread Robert Haas
On Thu, Jul 4, 2013 at 2:01 AM, Michael Paquier
 wrote:
> Hi all,
>
> The bgworker documentation does not explicitly mention that a
> bgworker using BGWORKER_BACKEND_DATABASE_CONNECTION needs to have
> BGWORKER_SHMEM_ACCESS set as well.
>
> Just to mention it, a bgworker using only the db connection flag and not
> the shmem flag will fail at start-up with this error:
> background worker "hello signal worker": must attach to shared memory
> in order to request a database connection
>
> Please find attached a patch that improves documentation a bit by
> mentioning this restriction. This should normally be backpatched to
> 9.3 stable.

Done.  Thanks.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] in-catalog Extension Scripts and Control parameters (templates?)

2013-07-04 Thread Jaime Casanova
On Thu, Jul 4, 2013 at 7:32 AM, Dimitri Fontaine  wrote:
>
>> - In alter.c you made AlterObjectRename_internal non static and
>> replaced a SearchSysCache1 call with a get_catalog_object_by_oid one.
>> Now, in its comment that function says that is for simple cases. And
>> because of the things you're doing it seems to me this isn't a simple
>> case. Maybe instead of modifying it you should create other function
>> RenameExtensionTemplateInternal, just like tablecmd.c does?
>
> The get_catalog_object_by_oid() is doing a SearchSysCache1 when that's
> relevant and possible, so I don't think I changed enough things around
> to warrant a different API. I'm open to hearing about why I'm wrong if
> that's the case, though.
>

Not sure if you're wrong, but at the very least you're missing a
heap_freetuple(oldtup) call there, because get_catalog_object_by_oid()

>
>> - extension.c: In function ‘get_ext_ver_list_from_catalog’:
>> extension.c:1150:25: warning: variable ‘evi’ set but not used
>> [-Wunused-but-set-variable]
>
> I don't have the warning here, and that code is unchanged from master's
> branch, only the name of the function did change. Do you have the same
> warning with master? which version of gcc are you using?
>

No, that code is not unchanged, because the function
get_ext_ver_list_from_catalog() comes from your patch.
It seems that get_ext_ver_info() appends a new ExtensionVersionInfo,
which is then returned and assigned to an *evi pointer that is never
used.

I'm sure the evi in line 1150 is only there because you need to receive
the returned value. Maybe you could use "(void) get_ext_ver_info()" (or
should it be (void *)?) to avoid that assignment and keep the compiler
quiet.

PS: i'm on gcc (Debian 4.7.2-5) 4.7.2

--
Jaime Casanova www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación
Phone: +593 4 5107566 Cell: +593 987171157




Re: [HACKERS] event trigger API documentation?

2013-07-04 Thread Peter Eisentraut
On Thu, 2013-07-04 at 10:15 +0200, Dimitri Fontaine wrote:
> There's a typo I made that I see only now:
> 
>   + 
>   +  tg_event

Fixed.






Re: [HACKERS] Grouping Sets

2013-07-04 Thread Atri Sharma
On Thu, Jul 4, 2013 at 6:56 PM, Dev Kumkar  wrote:
> On Thu, Jul 4, 2013 at 6:31 PM, Pavel Stehule 
> wrote:
>>
>> Hello
>>
>>
>> I don't work on this topic now, and my code is not usable for production.
>
>
> Ok, no problem. Will wait for any other pointers regarding any related
> patch here.
>
> Currently using UNION to achieve similar results, but looking to see if
> anything has already been done here.
>
> Looks like GROUPING SETS was in the TODO list long back.
> http://grokbase.com/t/postgresql/pgsql-general/06aaa4g7cq/cube-rollup-grouping-sets
>
> Am I missing anything here?
>
> Regards...

RhodiumToad and I discussed the idea recently, after David Fetter
suggested that we work on it. We may start work on it soon, though we
haven't thought it through in detail yet.
--
Regards,

Atri
l'apprenant




Re: [HACKERS] [9.3 bug fix] ECPG does not escape backslashes

2013-07-04 Thread Michael Meskes
On Thu, Jul 04, 2013 at 03:18:12PM +0200, Andres Freund wrote:
> Previous versions of CPP did not interpret escapes in ‘#line’; we have
> changed it because the standard requires they be interpreted, and most
> other compilers do.«

So that means MauMau was right and backslashes have to be escaped in filenames
in #line directives, right? Apparently my examples were badly chosen as I
didn't see an error no matter how many backslashes I had.

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
Jabber: michael.meskes at gmail dot com
VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL




Re: [HACKERS] Grouping Sets

2013-07-04 Thread Dev Kumkar
On Thu, Jul 4, 2013 at 6:31 PM, Pavel Stehule wrote:

> Hello
>
> I don't work on this topic now, and my code is not usable for production.


Ok, no problem. Will wait for any other pointers regarding any related
patch here.

Currently using UNION to achieve similar results, but looking to see if
anything has already been done here.

Looks like GROUPING SETS was in the TODO list long back.
http://grokbase.com/t/postgresql/pgsql-general/06aaa4g7cq/cube-rollup-grouping-sets

Am I missing anything here?

Regards...


Re: [HACKERS] Bugfix and new feature for PGXS

2013-07-04 Thread Andrew Dunstan


On 07/04/2013 09:14 AM, Cédric Villemain wrote:


> > > I'm not sure whether this is as simple as changing $< to $^ in the
> > > pgxs.mk's installcontrol rule, or if something more is required.
> > > Could you take a look?
> >
> > will do.
>
> ah yes, good catch, I thought .control files were unique per contrib,
> but they aren't.





It's already been fixed.

cheers

andrew




Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-07-04 Thread Bruce Momjian
On Thu, Jul  4, 2013 at 09:16:22AM -0400, Andrew Dunstan wrote:
> 
> On 07/04/2013 09:09 AM, Bruce Momjian wrote:
> >On Wed, Jul  3, 2013 at 03:34:06PM -0700, Josh Berkus wrote:
> >>On 07/03/2013 03:08 PM, Robert Haas wrote:
> >>>You are way out of line.  You have no right to expect ANYONE to
> >>>participate in patch review and commit.  Michael is doing us a favor
> >>>by maintaining ECPG even though he's not heavily involved in the
> >>>project any more and has other things to do with his time.
> >>That's a good point.  I hadn't considered (or realized the extent of)
> >>the occasional and specific nature of Michael's involvement with the
> >>project these days.  My apologies, then, Michael.
> >>
> >>Is there anyone else on the committer list with similar circumstances?
> >You know what this reminds me of --- early communist movements.  Members
> >were scrutinized to see if they were working hard enough for "the
> >cause", and criticized/shamed/punished if they were not.  The leaders
> >became tyrants, and justified the tyranny because they were creating "a
> >better society".  We all know that didn't end well.
> >
> 
> I think that's way over the top. Can we all just cool down a bit? I
> really don't see Josh as Stalin.

I don't either.  It is the "judging of others' efforts" that concerns me.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-07-04 Thread Andrew Dunstan


On 07/04/2013 09:09 AM, Bruce Momjian wrote:

On Wed, Jul  3, 2013 at 03:34:06PM -0700, Josh Berkus wrote:

On 07/03/2013 03:08 PM, Robert Haas wrote:

You are way out of line.  You have no right to expect ANYONE to
participate in patch review and commit.  Michael is doing us a favor
by maintaining ECPG even though he's not heavily involved in the
project any more and has other things to do with his time.

That's a good point.  I hadn't considered (or realized the extent of)
the occasional and specific nature of Michael's involvement with the
project these days.  My apologies, then, Michael.

Is there anyone else on the committer list with similar circumstances?

You know what this reminds me of --- early communist movements.  Members
were scrutinized to see if they were working hard enough for "the
cause", and criticized/shamed/punished if they were not.  The leaders
became tyrants, and justified the tyranny because they were creating "a
better society".  We all know that didn't end well.



I think that's way over the top. Can we all just cool down a bit? I 
really don't see Josh as Stalin.


cheers

andrew (who came top of Russian History in his final year)




Re: [HACKERS] [9.3 bug fix] ECPG does not escape backslashes

2013-07-04 Thread Andres Freund
On 2013-07-04 09:12:37 -0400, Andrew Dunstan wrote:
> 
> On 07/04/2013 08:58 AM, Andres Freund wrote:
> >On 2013-07-04 08:50:34 -0400, Andrew Dunstan wrote:
> >>On 07/04/2013 08:31 AM, Michael Meskes wrote:
> >>>On Thu, Jul 04, 2013 at 07:58:39AM -0400, Andrew Dunstan wrote:
> >michael@feivel:~$ grep line testa/init.c |head -1
> >#line 1 "test\\a/init.pgc"
> ...
> 
> Really? I'd expect to see 4 backslashes in the #line directive, I think.
> >>>Eh, why? The four backslashes are the two that were escaped for shell
> >>>usage. The directory name in my example was "test\\a". What did I miss?
> >>>
> >>Isn't the argument to #line a C string literal in which one would expect
> >>backslashes to be escaped? If not, how would it show a filename containing a
> >>'"' character?
> >>
> >>[andrew@emma inst.92.5701]$ bin/ecpg x\\\"a/y.pgc
> >>[andrew@emma inst.92.5701]$ grep line  x\\\"a/y.c
> >>#line 1 "x\"a/y.pgc"
> >>
> >>This must surely be wrong.
> >I think it's correct. Quoting the gcc manual
> >(http://gcc.gnu.org/onlinedocs/gcc-4.8.1/cpp/Include-Syntax.html#Include-Syntax)
> >"However, if backslashes occur within file, they are considered ordinary
> >text characters, not escape characters. None of the character escape
> >sequences appropriate to string constants in C are processed. Thus,
> >#include "x\n\\y" specifies a filename containing three
> >backslashes. (Some systems interpret ‘\’ as a pathname separator. All of
> >these also interpret ‘/’ the same way. It is most portable to use only
> >‘/’.)"
> 
> Well, that refers to #include, but for the sake of argument I'll assume the
> same rule applies to #line. So this just gets processed by stripping the
> surrounding quotes? Well I guess I learn something every day.

Gah. You're right. I only remembered the rules for #include and thought
that would be applicable. But:
http://gcc.gnu.org/onlinedocs/gcc-4.8.1/cpp/Line-Control.html#Line-Control :
»filename is interpreted according to the normal rules for a string
constant: backslash escapes are interpreted. This is different from
‘#include’.

Previous versions of CPP did not interpret escapes in ‘#line’; we have
changed it because the standard requires they be interpreted, and most
other compilers do.«

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Bugfix and new feature for PGXS

2013-07-04 Thread Cédric Villemain
> > I'm not sure whether this is as simple as changing $< to $^ in the
> > pgxs.mk's installcontrol rule, or if something more is required.
> > Could you take a look?
> >
> 
> will do.

ah yes, good catch, I thought .control files were unique per contrib, but they
aren't.
-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation


Re: [HACKERS] [9.3 bug fix] ECPG does not escape backslashes

2013-07-04 Thread Andrew Dunstan


On 07/04/2013 08:58 AM, Andres Freund wrote:

On 2013-07-04 08:50:34 -0400, Andrew Dunstan wrote:

On 07/04/2013 08:31 AM, Michael Meskes wrote:

On Thu, Jul 04, 2013 at 07:58:39AM -0400, Andrew Dunstan wrote:

michael@feivel:~$ grep line testa/init.c |head -1
#line 1 "test\\a/init.pgc"

...

Really? I'd expect to see 4 backslashes in the #line directive, I think.

Eh, why? The four backslashes are the two that were escaped for shell usage.
The directory name in my example was "test\\a". What did I miss?


Isn't the argument to #line a C string literal in which one would expect
backslashes to be escaped? If not, how would it show a filename containing a
'"' character?

[andrew@emma inst.92.5701]$ bin/ecpg x\\\"a/y.pgc
[andrew@emma inst.92.5701]$ grep line  x\\\"a/y.c
#line 1 "x\"a/y.pgc"

This must surely be wrong.

I think it's correct. Quoting the gcc manual
(http://gcc.gnu.org/onlinedocs/gcc-4.8.1/cpp/Include-Syntax.html#Include-Syntax)
"However, if backslashes occur within file, they are considered ordinary
text characters, not escape characters. None of the character escape
sequences appropriate to string constants in C are processed. Thus,
#include "x\n\\y" specifies a filename containing three
backslashes. (Some systems interpret ‘\’ as a pathname separator. All of
these also interpret ‘/’ the same way. It is most portable to use only
‘/’.)"


Well, that refers to #include, but for the sake of argument I'll assume 
the same rule applies to #line. So this just gets processed by stripping 
the surrounding quotes? Well I guess I learn something every day.



cheers

andrew




Re: [HACKERS] [PATCH] Remove useless USE_PGXS support in contrib

2013-07-04 Thread Cédric Villemain
Le mercredi 3 juillet 2013 23:56:42, Josh Berkus a écrit :
> Peter, Cedric, etc.:
> 
> Where are we on this patch?  Seems like discussion died out.  Should it
> be bounced?

I for my part have been persuaded that it is a good idea. The things that
appear to be lost are not really lost; it just shows that we need better
coverage in the PGXS area, which is another thing to consider (TODO: add
regression tests for PGXS ...)
-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-07-04 Thread Bruce Momjian
On Wed, Jul  3, 2013 at 03:34:06PM -0700, Josh Berkus wrote:
> On 07/03/2013 03:08 PM, Robert Haas wrote:
> > You are way out of line.  You have no right to expect ANYONE to
> > participate in patch review and commit.  Michael is doing us a favor
> > by maintaining ECPG even though he's not heavily involved in the
> > project any more and has other things to do with his time.
> 
> That's a good point.  I hadn't considered (or realized the extent of)
> the occasional and specific nature of Michael's involvement with the
> project these days.  My apologies, then, Michael.
> 
> Is there anyone else on the committer list with similar circumstances?

You know what this reminds me of --- early communist movements.  Members
were scrutinized to see if they were working hard enough for "the
cause", and criticized/shamed/punished if they were not.  The leaders
became tyrants, and justified the tyranny because they were creating "a
better society".  We all know that didn't end well.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Improvement of checkpoint IO scheduler for stable transaction responses

2013-07-04 Thread Andres Freund
On 2013-07-04 21:28:11 +0900, KONDO Mitsumasa wrote:
> >That would move all the vm and fsm forks to separate directories,
> >which would cut down the number of files in the main-fork directory
> >significantly.  That might be worth doing independently of the issue
> >you're raising here.  For large clusters, you'd even want one more
> >level to keep the directories from getting too big:
> >
> >base/${DBOID}/${FORK}/${X}/${RELFILENODE}
> >
> >...where ${X} is two hex digits, maybe just the low 16 bits of the
> >relfilenode number.  But this would be not as good for small clusters
> >where you'd end up with oodles of little-tiny directories, and I'm not
> >sure it'd be practical to smoothly fail over from one system to the
> >other.
> It seems a good idea! Generally, the base directory is not seen by the
> user, so a more efficient arrangement would help performance and adapt
> better to large databases.
>
> > Presumably the smaller segsize is better because we don't
> > completely stall the system by submitting up to 1GB of io at once. So,
> > if we were to do it in 32MB chunks and then do a final fsync()
> > afterwards we might get most of the benefits.
> Yes, I will try to test this setting ('./configure --with-segsize=0.03125')
> tonight. I will send you the test result tomorrow.

I don't like going in this direction at all:
1) it breaks pg_upgrade. Which means many of the bigger users won't be
   able to migrate to this and most packagers would carry the old
   segsize around forever.
   Even if we could get pg_upgrade to split files accordingly link mode
   would still be broken.
2) It drastically increases the number of file handles necessary and by
   extension increases the number of open/close calls. Those aren't all
   that cheap. And it increases metadata traffic since mtime/atime are
   kept for more files. Also, file creation is rather expensive since it
   requires a metadata transaction at the filesystem level.
3) It breaks readahead since that usually only works within a single
   file. I am pretty sure that this will significantly slow down
   uncached sequential reads on larger tables.

> (2013/07/03 22:39), Andres Freund wrote:> On 2013-07-03 17:18:29 +0900
> > Hm. I wonder how much of this could be gained by doing a
> > sync_file_range(SYNC_FILE_RANGE_WRITE) (or similar) either while doing
> > the original checkpoint-pass through the buffers or when fsyncing the
> > files.
> The sync_file_range system call is interesting, but it is supported only
> by Linux kernel 2.6.22 or later. For PostgreSQL, Robert's idea, which
> does not depend on the kind of OS, would suit better.

Well. But it can be implemented without breaking things... Even if we
don't have sync_file_range() we can cope by simply doing fsync()s more
frequently. For every open file keep track of the amount of buffers
dirtied and every 32MB or so issue an fdatasync()/fsync().

> I think the best way to write buffers in a checkpoint is sorted by the
> buffer's FD and block number, with a small segsize setting and
> appropriate sleep times. It would realize a genuinely sorted checkpoint
> with sequential disk writes!

That would make regular fdatasync()ing even easier.

Greetings,

Andres Freund

--
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Grouping Sets

2013-07-04 Thread Pavel Stehule
Hello

2013/7/4 Dev Kumkar :
> Hello,
>
> I am looking for the patch related to 'Implementation of GROUPING SETS'.
> Where can I get this from?
>
> Related thread:
> http://www.postgresql.org/message-id/162867790905121420p7c910054x24d8e327abd58...@mail.gmail.com
>

I don't work on this topic now, and my code is not usable for production.

Regards

Pavel Stehule

> Regards...




Re: [HACKERS] [9.3 bug fix] ECPG does not escape backslashes

2013-07-04 Thread Andres Freund
On 2013-07-04 08:50:34 -0400, Andrew Dunstan wrote:
> 
> On 07/04/2013 08:31 AM, Michael Meskes wrote:
> >On Thu, Jul 04, 2013 at 07:58:39AM -0400, Andrew Dunstan wrote:
> >>>michael@feivel:~$ grep line testa/init.c |head -1
> >>>#line 1 "test\\a/init.pgc"
> >>...
> >>
> >>Really? I'd expect to see 4 backslashes in the #line directive, I think.
> >Eh, why? The four backslashes are the two that were escaped for shell usage.
> >The directory name in my example was "test\\a". What did I miss?
> >
> 
> Isn't the argument to #line a C string literal in which one would expect
> backslashes to be escaped? If not, how would it show a filename containing a
> '"' character?
> 
>[andrew@emma inst.92.5701]$ bin/ecpg x\\\"a/y.pgc
>[andrew@emma inst.92.5701]$ grep line  x\\\"a/y.c
>#line 1 "x\"a/y.pgc"
> 
> This must surely be wrong.

I think it's correct. Quoting the gcc manual
(http://gcc.gnu.org/onlinedocs/gcc-4.8.1/cpp/Include-Syntax.html#Include-Syntax)
"However, if backslashes occur within file, they are considered ordinary
text characters, not escape characters. None of the character escape
sequences appropriate to string constants in C are processed. Thus,
#include "x\n\\y" specifies a filename containing three
backslashes. (Some systems interpret ‘\’ as a pathname separator. All of
these also interpret ‘/’ the same way. It is most portable to use only
‘/’.)"

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [9.3 bug fix] ECPG does not escape backslashes

2013-07-04 Thread Andrew Dunstan


On 07/04/2013 08:31 AM, Michael Meskes wrote:

On Thu, Jul 04, 2013 at 07:58:39AM -0400, Andrew Dunstan wrote:

michael@feivel:~$ grep line testa/init.c |head -1
#line 1 "test\\a/init.pgc"

...

Really? I'd expect to see 4 backslashes in the #line directive, I think.

Eh, why? The four backslashes are the two that were escaped for shell usage.
The directory name in my example was "test\\a". What did I miss?



Isn't the argument to #line a C string literal in which one would expect 
backslashes to be escaped? If not, how would it show a filename 
containing a '"' character?


   [andrew@emma inst.92.5701]$ bin/ecpg x\\\"a/y.pgc
   [andrew@emma inst.92.5701]$ grep line  x\\\"a/y.c
   #line 1 "x\"a/y.pgc"

This must surely be wrong.


cheers

andrew





[HACKERS] Grouping Sets

2013-07-04 Thread Dev Kumkar
 Hello,

I am looking for the patch related to 'Implementation of GROUPING SETS'.
Where can I get it?

Related thread:
http://www.postgresql.org/message-id/162867790905121420p7c910054x24d8e327abd58...@mail.gmail.com

Regards...


Re: [HACKERS] in-catalog Extension Scripts and Control parameters (templates?)

2013-07-04 Thread Dimitri Fontaine
Hi,

Please find attached version 8 of the patch, with fixes for almost all
reported problems. Thanks a lot to you reviewers for finding them!

I need some help with:

  - toast tables for new catalog tables
  - extension.c:1150:25: warning: variable ‘evi’ set but not used

See details below.

Hitoshi Harada  writes:
> - Why do we need with() clause even if I don't need it?

Not needed anymore; a regression test was added to cover the new grammar form.

> foo=# create template for extension ex2 version '1.0' with (requires ex1)
> as $$ $$;
> ERROR:  template option "requires" takes an argument

Not sure how to handle the grammar for that case, given that I'm using
the same tricks as in the CREATE ROLE options in order to avoid adding
new keywords in the patch.

> - create template ex2, create extension ex2, alter template ex2 rename to
> ex3, create extension ex3, drop template ex3;
> foo=# drop template for extension ex3 version '1.0';
> ERROR:  cannot drop unrecognized object 3179 16429 0 because other objects
> depend on it

Fixed in the attached.

> - a template that is created in another template script does not appear to
> depend on the parent template.

If I understand correctly, you mean doing CREATE TEMPLATE FOR EXTENSION
from within a CREATE EXTENSION script. That is now covered in the
attached.

> - Without control file/template, attempt to create a new extension gives:
> foo=# create extension plv8;
> ERROR:  extension "plv8" has no default control template
> but can it be better, like "extension plv8 has no default control file or
> template"?

Updated the error message, it now looks like that:

create extension plv8;
ERROR:  42704: Extension "plv8" is not available from 
"/Users/dim/pgsql/ddl/share/extension" nor as a template
LOCATION:  read_extension_control, extension.c:676
STATEMENT:  create extension plv8;

> - Looks like both tables don't have toast tables.  Some experiment gives:
> ERROR:  row is too big: size 8472, maximum size 8160

Not sure why. That's not fixed in the attached.

Alvaro Herrera  writes:
> Very minor comment here: these SGML "id" tags:
>
> + 

Changed to SQL-ALTER-TEMPLATE-FOR-EXTENSION, same with the CREATE and DROP
commands, and updated the cross references.

Alvaro Herrera  writes:
> What if read_extension_control_file() fails because of an out-of-memory
> error?  I think you need to extend that function to have a more useful
> API, not rely on it raising a specific error.  There is at least one
> more case when you're calling that function in the same way.

Good point. I'm now using something really simple:

if (access(get_extension_control_filename(extname), F_OK) == 0)
{
ereport(ERROR,
(errcode(ERRCODE_DUPLICATE_OBJECT),
 errmsg("extension \"%s\" is already available",
extname)));
}

After pondering about it for a while, that doesn't strike me as a
modularity violation severe enough to warrant more complex changes.

Jaime Casanova  writes:
> - The error message in aclchk.c:5175 isn't very clear, i mean the user
> should  see something better than "uptmpl"

Fixed in the attached.

> - In alter.c you made AlterObjectRename_internal non static and
> replaced a SearchSysCache1 call with a get_catalog_object_by_oid one.
> Now, in its comment that function says that is for simple cases. And
> because of the things you're doing it seems to me this isn't a simple
> case. Maybe instead of modifying it you should create other function
> RenameExtensionTemplateInternal, just like tablecmd.c does?

The get_catalog_object_by_oid() is doing a SearchSysCache1 when that's
relevant and possible, so I don't think I changed enough things around
to warrant a different API. I'm open to hearing about why I'm wrong if
that's the case, though.

> - This is a typo i guess: AtlerExtensionTemplateRename

Fixed in the attached.

> - In event_triggers.c, it seems the intention was to keep the
> event_trigger_support array in order, any reason to for not doing it?

Fixed in the attached.

> - extension.c: In function ‘get_ext_ver_list_from_catalog’:
> extension.c:1150:25: warning: variable ‘evi’ set but not used
> [-Wunused-but-set-variable]

I don't have the warning here, and that code is unchanged from the master
branch; only the name of the function changed. Do you have the same
warning with master? Which version of gcc are you using?

> Actually, what this did was to create a 123 schema and put the
> functions there.

There was a fault in the way default values are assigned to auxiliary
control entries in pg_extension_control when creating a template for
updating an extension. That's been fixed in the attached, and a new
regression test has been added.

> In this case only f1() and f3() exists, because the extension is going
> from 1.0 to 2.1. is this expected?

You can use the following SQL statement to debug your upgrade paths, as
you could already with file-based control in

Re: [HACKERS] [9.3 bug fix] ECPG does not escape backslashes

2013-07-04 Thread Michael Meskes
On Thu, Jul 04, 2013 at 07:58:39AM -0400, Andrew Dunstan wrote:
> >michael@feivel:~$ grep line testa/init.c |head -1
> >#line 1 "test\\a/init.pgc"
> ...
> 
> Really? I'd expect to see 4 backslashes in the #line directive, I think.

Eh, why? The four backslashes come from two that are escaped for shell usage.
The directory name in my example was "test\\a". What did I miss?

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
Jabber: michael.meskes at gmail dot com
VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL




Re: [HACKERS] Improvement of checkpoint IO scheduler for stable transaction responses

2013-07-04 Thread KONDO Mitsumasa

(2013/07/03 22:31), Robert Haas wrote:

On Wed, Jul 3, 2013 at 4:18 AM, KONDO Mitsumasa
 wrote:

I tested and changed segsize=0.25GB which is max partitioned table file size and
default setting is 1GB in configure option (./configure --with-segsize=0.25).
Because I thought that small segsize is good for fsync phase and background disk
write in OS in checkpoint. I got significant improvements in DBT-2 result!


This is interesting.  Unfortunately, it has a significant downside:
potentially, there will be a lot more files in the data directory.  As
it is, the number of files that exist there today has caused
performance problems for some of our customers.  I'm not sure off-hand
to what degree those problems have been related to overall inode
consumption vs. the number of files in the same directory.
Did you change the kernel parameter for the maximum number of FDs per process?
By default, the maximum number of FDs per process is 1024. I think that limit
might be exceeded with a 500GB-class database. Or this problem might be caused
by _mdfd_getseg() in md.c. In the write phase, dirty buffers don't carry their
own FD, so for each dirty buffer we have to seek out its FD and check the file.
I think that is safe for file writing, but it might be too wasteful. I think a
BufferTag should carry its own FD, which would make checkpoint writing more
efficient.



If the problem is mainly with the number of files in the same
directory, we could consider revising our directory layout.  Instead
of:

base/${DBOID}/${RELFILENODE}_{FORK}

We could have:

base/${DBOID}/${FORK}/${RELFILENODE}

That would move all the vm and fsm forks to separate directories,
which would cut down the number of files in the main-fork directory
significantly.  That might be worth doing independently of the issue
you're raising here.  For large clusters, you'd even want one more
level to keep the directories from getting too big:

base/${DBOID}/${FORK}/${X}/${RELFILENODE}

...where ${X} is two hex digits, maybe just the low 16 bits of the
relfilenode number.  But this would be not as good for small clusters
where you'd end up with oodles of little-tiny directories, and I'm not
sure it'd be practical to smoothly fail over from one system to the
other.
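The fan-out Robert sketches could be computed along these lines — a hypothetical sketch where `dboid`, `fork`, and `relfilenode` are illustrative parameter names, noting that two hex digits actually cover only the low 8 bits of the relfilenode:

```c
#include <stdio.h>

/* Hypothetical sketch of the proposed layout
 * base/${DBOID}/${FORK}/${X}/${RELFILENODE}, where X is two hex
 * digits taken from the low byte of the relfilenode number. */
static void relpath_fanout(char *buf, size_t buflen,
                           unsigned int dboid, const char *fork,
                           unsigned int relfilenode)
{
    snprintf(buf, buflen, "base/%u/%s/%02x/%u",
             dboid, fork, relfilenode & 0xFF, relfilenode);
}
```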

That seems like a good idea! In general, the base directory is not seen by
users, so a more efficient arrangement there would help performance and suit
large databases.

(2013/07/03 22:39), Andres Freund wrote:
> On 2013-07-03 17:18:29 +0900
> Hm. I wonder how much of this could be gained by doing a
> sync_file_range(SYNC_FILE_RANGE_WRITE) (or similar) either while doing
> the original checkpoint-pass through the buffers or when fsyncing the
> files.
The sync_file_range system call is interesting, but it is supported only by
Linux kernel 2.6.22 or later. For PostgreSQL, Robert's idea, which does not
depend on the kind of OS, may be a better fit.
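As a rough illustration of the sync_file_range idea (a Linux-only sketch, not proposed PostgreSQL code; function names are ours), writeback could be initiated in chunks before one final durable flush:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch: start writeback of a file in fixed-size chunks using
 * sync_file_range(2) where available, then make it durable with
 * fdatasync().  Returns 0 on success, -1 on error. */
int flush_in_chunks(int fd, off_t total, off_t chunk)
{
#if defined(__linux__)
    for (off_t off = 0; off < total; off += chunk)
        if (sync_file_range(fd, off, chunk, SYNC_FILE_RANGE_WRITE) != 0)
            return -1;
#endif
    return fdatasync(fd);
}

/* Self-contained demo: write 1MB to a temp file and flush it in
 * 256kB chunks. */
int flush_demo(void)
{
    FILE *f = tmpfile();
    char page[8192] = {0};
    int rc;

    if (f == NULL)
        return -1;
    for (int i = 0; i < 128; i++)
        if (fwrite(page, sizeof(page), 1, f) != 1)
            return -1;
    if (fflush(f) != 0)
        return -1;
    rc = flush_in_chunks(fileno(f), 128 * 8192, 32 * 8192);
    fclose(f);
    return rc;
}
```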


> Presumably the smaller segsize is better because we don't
> completely stall the system by submitting up to 1GB of io at once. So,
> if we were to do it in 32MB chunks and then do a final fsync()
> afterwards we might get most of the benefits.
Yes, I will try the './configure --with-segsize=0.03125' setting tonight
and send you the test result tomorrow.

I think the best way to write buffers in a checkpoint is sorted by the
buffer's FD and block number, with a small segsize setting and appropriate
sleep times for each. That would realize a genuinely sorted checkpoint with
sequential disk writing!


Best regards,
--
Mitsumasa KONDO
NTT Open Source Software Center




Re: [HACKERS] [9.3 bug fix] ECPG does not escape backslashes

2013-07-04 Thread Andrew Dunstan


On 07/04/2013 07:04 AM, Michael Meskes wrote:

On Wed, Jul 03, 2013 at 07:22:48PM +0900, MauMau wrote:

I happened to find a trivial bug of ECPG while experimenting with
9.3 beta 2.  Please find attached the patch to fix this.  This is
not specific to 9.3.  Could you commit and backport this?

This appears to be Windows specific. I don't have a Windows system to test
with. How does Visual Studio handle #line entries with full path names? Are
they all escaped? Or better do they have to be?


This is necessary not only on Windows but also on UNIX/Linux.  For
your information, running "gcc -E di\\r/a.c" escapes \ and outputs
the line:

# 1 "di\\r/a.c"

Now this statement surprises me:

michael@feivel:~$ ecpg testa/init.pgc
michael@feivel:~$ grep line testa/init.c |head -1
#line 1 "test\\a/init.pgc"
michael@feivel:~$ gcc -o i testa/init.c -I /usr/include/postgresql/ -l ecpg
michael@feivel:~$

This seems to suggest that it works nicely on Linux.



Really? I'd expect to see 4 backslashes in the #line directive, I think.

cheers

andrew





Re: [HACKERS] [9.3 bug fix] ECPG does not escape backslashes

2013-07-04 Thread Michael Meskes
On Wed, Jul 03, 2013 at 07:22:48PM +0900, MauMau wrote:
> I happened to find a trivial bug of ECPG while experimenting with
> 9.3 beta 2.  Please find attached the patch to fix this.  This is
> not specific to 9.3.  Could you commit and backport this?

This appears to be Windows specific. I don't have a Windows system to test
with. How does Visual Studio handle #line entries with full path names? Are
they all escaped? Or better do they have to be?

> This is necessary not only on Windows but also on UNIX/Linux.  For
> your information, running "gcc -E di\\r/a.c" escapes \ and outputs
> the line:
> 
> # 1 "di\\r/a.c"

Now this statement surprises me:

michael@feivel:~$ ecpg testa/init.pgc 
michael@feivel:~$ grep line testa/init.c |head -1
#line 1 "test\\a/init.pgc"
michael@feivel:~$ gcc -o i testa/init.c -I /usr/include/postgresql/ -l ecpg
michael@feivel:~$ 

This seems to suggest that it works nicely on Linux. 

So what do you mean when saying the problem also occurs on Linux?

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
Jabber: michael.meskes at gmail dot com
VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL




Re: [HACKERS] [PATCH] Add an ldapoption to disable chasing LDAP referrals

2013-07-04 Thread Magnus Hagander
On Thu, Jul 4, 2013 at 2:30 AM, James Sewell wrote:

> Heya,
>
> I see what you are saying, the problem as I see it is that the action we
> are taking here is "disable chasing ldap referrals". If the name is
> ldapreferrals and we use a boolean, then setting it to 1 reads in a
> counterintuitive manner:
>

That assumes that the default in the ldap library is always going to be to
chase them. Does the standard somehow mandate that it should be?


  "set ldapreferrals=true to disable chasing LDAP referrals."
>

You'd obviously reverse the meaning as well. "set ldapreferrals=false to
disable chasing LDAP referrals."


> Perhaps you are fine with this though, if it's documented? It does work in
> the inverse way to pam_ldap, where setting to true enables referral
> chasing. pam_ldap works like so:
>
>   not set  : library default
>   set to 0 : disable referral chasing
>   set to 1 : enable referral chasing
>
>
That is exactly what I'm suggesting it should do, and I'm pretty sure
that's what Peter suggested as well.
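The tri-state behaviour described above could be parsed along these lines — a sketch only; the function name and accepted values are illustrative, not from the patch:

```c
#include <string.h>

/* Tri-state option sketch: -1 = not set (leave the LDAP library
 * default alone), 0 = disable referral chasing, 1 = enable it. */
int parse_ldapreferrals(const char *value)
{
    if (value == NULL)
        return -1;              /* option absent: library default */
    if (strcmp(value, "1") == 0 || strcmp(value, "true") == 0)
        return 1;
    return 0;
}
```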



-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] in-catalog Extension Scripts and Control parameters (templates?)

2013-07-04 Thread Hitoshi Harada
On Thu, Jul 4, 2013 at 12:46 AM, Jaime Casanova  wrote:
> On Thu, Jul 4, 2013 at 2:42 AM, Jaime Casanova  wrote:
>>
>> create extension test version '123';
>> CREATE EXTENSION
>>
>> postgres=# \df
>>List of functions
>>  Schema | Name | Result data type | Argument data types | Type
>> +--+--+-+--
>> (0 rows)
>>
>> Actually, what this did was to create a 123 schema and put the
>> functions there.
>>
>> But that schema is inaccessible and undroppable:
>>
>
> and dropping the extension let the schema around
>

Hm?  I guess '123' is not a schema; it's a version.

--
Hitoshi Harada




Re: [HACKERS] [PATCH] Add transforms feature

2013-07-04 Thread Hitoshi Harada
On Thu, Jun 13, 2013 at 8:11 PM, Peter Eisentraut  wrote:

> A transform is an SQL object that supplies two functions for converting
> between data types and procedural languages.  For example, a transform
> could arrange that hstore is converted to an appropriate hash or
> dictionary object in PL/Perl or PL/Python.
>
>
The patch applies, and the regression tests (including contrib) pass on my Mac.  I
know I came late, but I have a few questions.


- vs SQL standard
I'm worried about overloading the standard.  As the document says, the SQL
standard defines a CREATE TRANSFORM syntax which is exactly the same as this
proposal but for a different purpose.  The standard feature is, I guess, data
conversion between client- and server-side data types.  I am concerned
because if someone wants to implement that SQL standard feature in the
future, there is no way to do so without breaking things.  I'd be happy if
the subsequent clause were different.  CREATE TYPE has two different
syntaxes, one for composite types and one for internal user-defined types
(I'm not sure either is defined in the standard, though), and I think we
could do something like that.  Or, as someone suggested in the previous
thread, it might be a variant of CAST: CREATE CAST (hstore AS plpython2u)?
Or CREATE LANGUAGE TRANSFORM might sound better.  In either case, I think we
are missing the discussion on overloading the standard.

- dependency loading issue
Although most of the use cases are via CREATE EXTENSION, it is not great to
make users load the dependencies manually.  Is it possible to load
hstore.so and plpython2u.so from the _PG_init of hstore_plpython2u?  Because
the author of a transform certainly knows the names of the shared libraries
in the database installation, writing down the shared library names in the
init function sounds reasonable.  Or do we still need to consider cases
where plpython2u.so is renamed to something else?

- function types
Although I read the suggestion to use internal type as the argument of
from_sql function, I guess it exposes another security risk.  Since we
don't know what SQL type can safely be passed to the from_sql function, we
cannot check if the function is the right one at the time of CREATE
TRANSFORM.  Non-super user can add his user defined type and own it, and
create a transform that with from_sql function that takes internal and
crashes with this user-defined type.  A possible compromise is let only
super user create transforms, or come up with nice way to allow
func(sql_type) -> internal signature.

- create or replace causes inconsistency
I tried:
  * create transform python to hstore (one way transform)
  * create function f(h hstore) language python
  * create or replace transform hstore to python and python to hstore (both
ways)
  * call f() causes error, since it misses hstore to python transform. It
is probably looking at the old definition

- create func -> create transform is not prohibited
I saw your post in the previous discussion:

> > * I don't think recording dependencies on transforms used when creating
> > functions is a good idea as the transform might get created after the
> > functions already exists. That seems to be a pretty confusing behaviour.
>
> We need the dependencies, because otherwise dropping a transform would
> break or silently alter the behavior of functions that depend on it.
> That sounds like my worst nightmare, thinking of some applications that
> would be affected by that.  But your point is a good one.  I think this
> could be addressed by prohibiting the creation of a transform that
> affects functions that already exist.

However, I don't see this prohibition when creating a transform if such a
function already exists. Are you not planning to address this issue?

For now, that's it.  I'm going to dig more later.

Thanks,

Hitoshi Harada


Re: [HACKERS] possible/feasible to specify field and value in error msg?

2013-07-04 Thread Pavel Stehule
Hello

2013/7/4 Willy-Bas Loos :
> On Wed, Jul 3, 2013 at 5:18 PM, Bruce Momjian  wrote:
>>
>> On Wed, Jul  3, 2013 at 11:14:18AM -0400, Bruce Momjian wrote:
>> > We will add optional error details in Postgres 9.3:
>> >
>> >   http://momjian.us/main/blogs/pgblog/2013.html#April_11_2013
>
>
>>
>> I just tested this and it doesn't show the offending column name;
>> sorry:
>>
>> test=> CREATE TABLE test(x smallint);
>> CREATE TABLE
>> test=> \set VERBOSITY verbose
>> test=> INSERT INTO test VALUES (1000);
>> ERROR:  22003: smallint out of range
>> LOCATION:  i4toi2, int.c:349
>>
>
>
> It's great to see that you people care about "userland", judging by the
> effort that you describe in your article.
> In fact you're already doing the thing that I asked about; I see that even
> the offending tuple is printed (which is new).
> And of course it's not necessary to mention the column name when you mention
> the constraint name.
> (BTW: your remark about NOT NULL constraints is not necessary, that error
> message is very clear:"ERROR:  null value in column "balance" violates
> not-null constraint" )
>
> This is not a constraint going off, and in this case, none of that applies.
> But it seems probable to me that some day it will, seeing as you already
> implemented it for constraints.

This functionality will be enhanced in the future, but it depends heavily
on the current constraint and check implementation. For some kinds of
errors we are not able to associate an exception with a related column -
typically domain errors; probably we can fill the DATATYPE field
in this case.

Regards

Pavel Stehule

>
> Thanks,
>
> Willy-Bas Loos
>
> --
> "Quality comes from focus and clarity of purpose" -- Mark Shuttleworth




Re: [HACKERS] possible/feasible to specify field and value in error msg?

2013-07-04 Thread Willy-Bas Loos
On Wed, Jul 3, 2013 at 5:18 PM, Bruce Momjian  wrote:

> On Wed, Jul  3, 2013 at 11:14:18AM -0400, Bruce Momjian wrote:
> > We will add optional error details in Postgres 9.3:
> >
> >   http://momjian.us/main/blogs/pgblog/2013.html#April_11_2013
>


> I just tested this and it doesn't show the offending column name;
> sorry:
>
> test=> CREATE TABLE test(x smallint);
> CREATE TABLE
> test=> \set VERBOSITY verbose
> test=> INSERT INTO test VALUES (1000);
> ERROR:  22003: smallint out of range
> LOCATION:  i4toi2, int.c:349
>
>

It's great to see that you people care about "userland", judging by the
effort that you describe in your article.
In fact you're already doing the thing that I asked about; I see that even
the offending tuple is printed (which is new).
And of course it's not necessary to mention the column name when you
mention the constraint name.
(BTW: your remark about NOT NULL constraints is not necessary, that error
message is very clear:"ERROR:  null value in column "balance" violates
not-null constraint" )

This is not a constraint going off, and in this case, none of that applies.
But it seems probable to me that some day it will, seeing as you already
implemented it for constraints.

Thanks,

Willy-Bas Loos

-- 
"Quality comes from focus and clarity of purpose" -- Mark Shuttleworth


Re: [HACKERS] event trigger API documentation?

2013-07-04 Thread Dimitri Fontaine
Peter Eisentraut  writes:
> Applied with some tweaks.

Thanks!

There's a typo I made that I see only now:

  + 
  +  tg_event

I think that should be "event", not "tg_event", because we're listing
the fields for the EventTriggerData structure and not the magic variable
names found in the PLs.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Review: UNNEST (and other functions) WITH ORDINALITY

2013-07-04 Thread Dean Rasheed
On 4 July 2013 00:08, David Fetter  wrote:
> Patch re-jiggered for recent changes to master.
>

I re-validated this, and it all still looks good, so still ready for
committer IMO.

Regards,
Dean




Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-07-04 Thread Mark Kirkwood

On 04/07/13 10:43, Robert Haas wrote:


And
people who submit patches for review should also review patches: they
are asking other people to do work, so they should also contribute
work.



I think that is an overly simplistic view of things. People submit 
patches for a variety of reasons, but typically because they think the 
patch will make the product better (bugfix or new functionality). This 
is a contribution in itself, not a debt.


Now reviewing is performed to ensure that submitted code is *actually* 
going to improve the product.


Both these activities are volunteer work - to attempt to tie them 
together forcibly is unusual, to say the least. The skills and 
experience necessary to review patches are considerably higher than 
those required to produce patches, hence the topic of this thread.


Now I do understand we have a shortage of reviewers and lots of patches, 
and that this *is* a problem - but what a wonderful problem...many open 
source projects would love to be in this situation!!!


Regards

Mark





Re: [HACKERS] Review: Patch to compute Max LSN of Data Pages

2013-07-04 Thread Hari Babu

On Wednesday, July 03, 2013 1:26 AM Robert Haas Wrote:
>On Tue, Jun 25, 2013 at 1:42 PM, Andres Freund 
wrote:
>> I think the usecase for this utility isn't big enough to be included in
>> postgres since it really can only help in a very limited
>> circumstances. And I think it's too likely to be misused for stuff it's
>> not useable for (e.g. remastering).
>>
>> The only scenario I see is that somebody deleted/corrupted
>> pg_controldata. In that scenario the tool is supposed to be used to find
>> the biggest lsn used so far so the user then can use pg_resetxlog to set
>> that as the wal starting point.
>> But that can be way much easier solved by just setting the LSN to
>> something very, very high. The database cannot be used for anything
>> reliable afterwards anyway.

>I guess this is true, but I think I'm mildly in favor of including
>this anyway.  I think I would have used it once or twice, if it had
>been there - maybe not even to feed into pg_resetxlog, but just to
>check for future LSNs.  We don't have anything like a suite of
>diagnostic tools in bin or contrib today, for use by database
>professionals in fixing things that fall strictly in the realm of
>"don't try this at home", and I think we should have such a thing.
>Unfortunately this covers about 0.1% of the space I'd like to see
>covered, which might be a reason to reject it or at least insist that
>it be enhanced first.

>At any rate, I don't think this is anywhere near committable as it
>stands today.  Some random review comments:

Thanks for the detailed review.

>remove_parent_refernces() is spelled wrong.

>Why does this patch need all of this fancy path-handling logic -
>specifically, remove_parent_refernces() and make_absolute_path()?
>Surely its needs are not that different from pg_ctl or pg_resetxlog or
>pg_controldata.  If they all need it and it's duplicated, we should
>fix that.  Otherwise, why the special logic here?

>I don't think we need getLinkPath() either.  The server has no trouble
>finding its files by just using a pathname that follows the symlink.
>Why do we need anything different here?  The whole point of symlinks
>is that you can traverse them transparently, without knowing where
>they point.

Removed the special handling of path functions.

>Duplicating PageHeaderIsValid doesn't seem acceptable.  Moreover,
>since the point of this is to be able to use it on a damaged cluster,
>why is that logic even desirable?  I think we'd be better off assuming
>the pages to be valid.

Corrected.

>The calling convention for this utility seems quite baroque.  There's
>no real need for all of this -p/-P stuff.  I think the syntax should
>just be:

>pg_computemaxlsn file_or_directory...

>For each argument, we determine whether it's a file or a directory.
>If it's a file, we assume it's a PostgreSQL data file and scan it.  If
>it's a directory, we check whether it looks like a data directory.  If
>it does, we recurse through the whole tree structure and find the data
>files, and process them.  If it doesn't look like a data directory, we
>scan each plain file in that directory whose name looks like a
>PostgreSQL data file name.  With this approach, there's no need to
>limit ourselves to a single input argument and no need to specify what
>kind of argument it is; the tool just figures it out.
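The file-versus-directory dispatch described above could start with something as simple as this — a sketch with our own names, not code from the patch:

```c
#include <sys/stat.h>

/* Sketch of the dispatch described above: classify an argument as a
 * directory (recurse into it), a plain file (scan it), or missing. */
enum arg_kind { ARG_MISSING, ARG_FILE, ARG_DIRECTORY };

enum arg_kind classify_arg(const char *path)
{
    struct stat st;

    if (stat(path, &st) != 0)
        return ARG_MISSING;
    if (S_ISDIR(st.st_mode))
        return ARG_DIRECTORY;
    return ARG_FILE;
}
```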

Changed to accept file or directory without of options.

>I think it would be a good idea to have a mode that prints out the max
>LSN found in *each data file* scanned, and then prints out the overall
>max found at the end.  In fact, I think that should perhaps be the
>default mode, with -q, --quiet to disable it.  When printing out the
>per-file data, I think it would be useful to track and print the block
>number where the highest LSN in that file was found.  I have
>definitely had cases where I suspected, but was not certain of,
>corruption.  One could use a tool like this to hunt for problems, and
>then use something like pg_filedump to dump the offending blocks.
>That would be a lot easier than running pg_filedump on the whole file
>and grepping through the output.

Corrected.

>Similarly, I see no reason for the restriction imposed by
>check_path_belongs_to_pgdata().  I've had people mail me one data
>file; why shouldn't I be able to run this tool on it?  It's a
>read-only utility.

>- if (0 != FindMaxLSNinDir(pathbuf, maxlsn, false)) and similar is not
>PostgreSQL style.

>+   printf(_("%s compute the maximum LSN in PostgreSQL data
>pages.\n\n"), progname);

Fixed.

>+   /*
>+* Don't allow pg_computemaxlsn to be run as root, to avoid overwriting
>+* the ownership of files in the data directory. We need only check for
>+* root -- any other user won't have sufficient permissions to modify
>+* files in the data directory.
>+*/
>+ #ifndef WIN32
>+   if (geteuid() == 0)
>+   {
>+   fprintf(stderr, _("%s: cannot be executed by \"root\".\n"),
>+

Re: [HACKERS] request a new feature in fuzzystrmatch

2013-07-04 Thread Michael Paquier
Hi Liming,

Here is a more formal review of this patch.

First, when submitting a patch, please follow the following guidelines:
http://wiki.postgresql.org/wiki/Submitting_a_Patch
http://wiki.postgresql.org/wiki/Creating_Clean_Patches

Particularly, when you develop a new feature, people will expect that
you submit your patches based on the master branch in git. This
greatly facilitates the review and testing work that can be done based
on your work. So do not create a new dedicated project on github like
that => https://github.com/liminghu/fuzzystrmatch, but for example
fork the postgres repository, and work directly on it using for
example custom branches, then generate patches between your custom
branches and postgres master branch.

Also, new features need to be submitted to a commit fest
(https://commitfest.postgresql.org/). The next commit fest is in
September, so you have plenty of time to update your patch based on
the comments of my review (and perhaps comments of others), then
register a patch directly there when you are done.

In order to perform my review, I took your github project and
generated a diff patch. The patch is attached, and applies to master
branch, so you could use it for your future work as a base.

OK, now for the review, here are some comments about the structure of
the patch, which is incorrect based on the structure extensions need
to have.
1) This patch lacks documentation; you shouldn't add a README.md file.
Remove it and update the dedicated sgml documentation instead. In your
case, the file to be updated with more details about
Damerau–Levenshtein is doc/src/sgml/fuzzystrmatch.sgml.
2) Remove fuzzystrmatch--1.0.sql, this is not necessary anymore
because this module is upgraded to 1.1
3) Remove fuzzystrmatch--unpackaged--1.1.sql, it is not needed
4) Add fuzzystrmatch--1.0--1.1.sql, a file that can be used
to upgrade an existing fuzzystrmatch 1.0 to 1.1. It needs to contain
all the modifications required for a correct upgrade: alter existing
objects to their new shape, add new objects, and remove unnecessary
objects. In your case, this file should have this shape:
/* contrib/fuzzystrmatch/fuzzystrmatch--1.0--1.1.sql */

-- complain if script is sourced in psql, rather than via ALTER EXTENSION
\echo Use "ALTER EXTENSION fuzzystrmatch UPDATE" to load this file. \quit

CREATE FUNCTION damerau_levenstein()...
etc.
5) Your code copies a function from TOMOYO Linux, which is under the
GPLv2 license, so I believe it cannot be integrated into Postgres, which
is under the PostgreSQL license (more permissive). I am not sure, but
the fact that some portions of this code cannot be included in Postgres
may alone be sufficient reason to reject this patch.

This is all I have about the shape of the patch...

Also, I am not (yet) very familiar with Damerau–Levenshtein itself and
I need to read more about that before giving a precise opinion... but
here are some comments anyway based on what I can read from the code:
1) There is a lot of duplicated code among levenshtein.c,
dameraulevenshtein.c and dameraulevenshtein_new.c; why is that?
levenshtein.c even refers to a variable called trans_c which is used
nowhere.
2) A simple diff between, for example, levenshtein.c and
dameraulevenshtein.c shows that the only new thing is the function
dameraulevenshtein_internal_noncompatible copied from TOMOYO Linux...
and this function is used nowhere. The new Damerau–Levenshtein
functions equivalent to levenshtein[_less_equal], called
dameraulevenshtein[_less_equal], are exactly the same thing.
So, in short, even though I cannot give a precise opinion about
Damerau–Levenshtein itself, what this patch tries to achieve is unclear
to me... The only new feature I can see is
dameraulevenshtein_internal_noncompatible, but this is taken from
GPL-licensed code.
--
Michael


20130704_damerau_levenstein.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] in-catalog Extension Scripts and Control parameters (templates?)

2013-07-04 Thread Jaime Casanova
On Thu, Jul 4, 2013 at 2:42 AM, Jaime Casanova  wrote:
>
> create extension test version '123';
> CREATE EXTENSION
>
> postgres=# \df
>    List of functions
>  Schema | Name | Result data type | Argument data types | Type
> --------+------+------------------+---------------------+------
> (0 rows)
>
> Actually, what this did was to create a schema named 123 and put the
> functions there.
>
> But that schema is inaccessible and undroppable:
>

and dropping the extension left the schema behind

--
Jaime Casanova www.2ndQuadrant.com
Professional PostgreSQL: 24x7 support and training
Phone: +593 4 5107566 Cell: +593 987171157




Re: [HACKERS] in-catalog Extension Scripts and Control parameters (templates?)

2013-07-04 Thread Jaime Casanova
On Mon, Jun 24, 2013 at 6:20 AM, Dimitri Fontaine
 wrote:
> Jaime Casanova  writes:
>> just tried to build this one, but it doesn't apply cleanly anymore...
>> especially the ColId_or_Sconst construct in gram.y
>
> Please find attached a new version of the patch, v7, rebased to current
> master tree and with some more cleanup. I've been using the new grammar
> entry NonReservedWord_or_Sconst, I'm not sure about that.
>

Hi,

code review (haven't read all the code)


- The error message in aclchk.c:5175 isn't very clear; I mean, the user
should see something better than "uptmpl":
"""
if (!HeapTupleIsValid(tuple))
    ereport(ERROR,
            (errcode(ERRCODE_UNDEFINED_OBJECT),
             errmsg("extension uptmpl with OID %u does not exist",
                    ext_uptmpl_oid)));
"""

- In alter.c you made AlterObjectRename_internal non-static and
replaced a SearchSysCache1 call with a get_catalog_object_by_oid one.
Now, the comment on that function says it is for simple cases, and
given the things you're doing, this doesn't seem to be a simple case.
Maybe instead of modifying it you should create another function,
RenameExtensionTemplateInternal, just like tablecmds.c does?

btw, get_catalog_object_by_oid will execute a SearchSysCacheCopy1, so
you should be calling heap_freetuple(oldtuple)

- This is a typo, I guess: AtlerExtensionTemplateRename

- In event_triggers.c, it seems the intention was to keep the
event_trigger_support array in order; any reason for not doing it?

- extension.c: In function ‘get_ext_ver_list_from_catalog’:
extension.c:1150:25: warning: variable ‘evi’ set but not used
[-Wunused-but-set-variable]


Functionality
=============

i tried this:

create template for extension test version 'abc' with (nosuperuser) as $$
  create function f1(i int) returns int as $_$ select 1; $_$
language sql;
$$;
CREATE TEMPLATE FOR EXTENSION

create template for extension test from 'abc' to 'xyz' with (nosuperuser) as $$
  create function f2(i int) returns int as $_$ select 1; $_$
language sql;
$$;
CREATE TEMPLATE FOR EXTENSION

create template for extension test from 'xyz' to '123' with (nosuperuser) as $$
  create function f3(i int) returns int as $_$ select 1; $_$
language sql;
$$;
CREATE TEMPLATE FOR EXTENSION

create extension test version '123';
CREATE EXTENSION

postgres=# \df
   List of functions
 Schema | Name | Result data type | Argument data types | Type
--------+------+------------------+---------------------+------
(0 rows)

Actually, what this did was to create a schema named 123 and put the
functions there.

But that schema is inaccessible and undroppable:

select * from "123".f1(1);
ERROR:  schema "123" does not exist

drop schema "123";
ERROR:  schema "123" does not exist

--

postgres=# create template for extension test2 version '1.0' with
(nosuperuser) as $$
  create function f1(i int) returns int as $_$ select 1; $_$
language sql;
$$;
CREATE TEMPLATE FOR EXTENSION

postgres=# create template for extension test2 from '1.0' to '1.1'
with (nosuperuser) as $$
  create function f2(i int) returns int as $_$ select 1; $_$
language sql;
$$;
CREATE TEMPLATE FOR EXTENSION

postgres=# create template for extension test2 from '1.0' to '2.1'
with (nosuperuser) as $$
  create function f3(i int) returns int as $_$ select 1; $_$
language sql;
$$;
CREATE TEMPLATE FOR EXTENSION

postgres=# create extension test2 version '2.1';
CREATE EXTENSION


In this case only f1() and f3() exist, because the extension goes
directly from 1.0 to 2.1. Is this expected?
And, yes... the functions are in the schema "2.1"

--
Jaime Casanova www.2ndQuadrant.com
Professional PostgreSQL: 24x7 support and training
Phone: +593 4 5107566 Cell: +593 987171157

