[HACKERS] proposal: rounding up time value less than its unit.

2014-07-10 Thread Tomonari Katsumata
Hi,

A couple of weeks ago, I posted a mail to pgsql-doc.
http://www.postgresql.org/message-id/53992ff8.2060...@po.ntts.co.jp

In that thread, I concluded that it's better to
round up a value which is less than its unit.
The current behavior (rounding down) carries an undesirable risk,
because some GUCs give a setting of 0 a special meaning.

And then I made a patch for this.
Please check the attached patch.

regards,
---
Tomonari Katsumata
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 04ddd73..9aaffb0 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -43,7 +43,9 @@
  are <literal>ms</literal> (milliseconds), <literal>s</literal>
  (seconds), <literal>min</literal> (minutes), <literal>h</literal>
  (hours), and <literal>d</literal> (days).  Note that the multiplier
- for memory units is 1024, not 1000.
+ for memory units is 1024, not 1000.  Also note that if you set a
+ time value smaller than the setting's unit, the value will be
+ rounded up.
 </para>
 
 <para>
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 3a31a75..8ba4879 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -5122,9 +5122,11 @@ parse_int(const char *value, int *result, int flags, const char **hintmsg)
 switch (flags & GUC_UNIT_TIME)
 {
 	case GUC_UNIT_S:
+		val += (val < MS_PER_S) ? MS_PER_S : 0;
 		val /= MS_PER_S;
 		break;
 	case GUC_UNIT_MIN:
+		val += (val < MS_PER_MIN) ? MS_PER_MIN : 0;
 		val /= MS_PER_MIN;
 		break;
 }
@@ -5138,6 +5140,7 @@ parse_int(const char *value, int *result, int flags, const char **hintmsg)
 		val *= MS_PER_S;
 		break;
 	case GUC_UNIT_MIN:
+		val += (val < S_PER_MIN) ? S_PER_MIN : 0;
 		val /= S_PER_MIN;
 		break;
 }
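For readers skimming the hunks above (ms-to-s, ms-to-min and s-to-min
conversions): a minimal standalone sketch of the conversion they aim at is
below. MS_PER_S is redefined locally for illustration; unlike the patch
itself, the sketch also guards val > 0 so that an explicit 0 keeps its
special meaning.

#include <stdio.h>

#define MS_PER_S 1000

/*
 * Convert milliseconds to seconds, rounding a nonzero sub-unit value
 * up to 1 instead of truncating it to 0 (which some GUCs treat as a
 * special "disabled" value).
 */
static long
ms_to_s_round_up(long val)
{
	if (val > 0 && val < MS_PER_S)
		val += MS_PER_S;	/* bump below-unit values over the unit */
	return val / MS_PER_S;
}

int
main(void)
{
	printf("%ld\n", ms_to_s_round_up(500));	/* prints 1, not 0 */
	printf("%ld\n", ms_to_s_round_up(0));	/* prints 0 */
	return 0;
}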



Re: [HACKERS] IMPORT FOREIGN SCHEMA statement

2014-07-10 Thread Ronan Dunklau
On Wednesday, July 9, 2014 at 21:30:05, Tom Lane wrote:
 Michael Paquier michael.paqu...@gmail.com writes:
  I guess that this implementation is enough as a first shot, particularly
  regarding the complexity that default expression import would bring up for
  postgres_fdw (point raised during the review, but not really discussed
  afterwards).
 
 Yeah.  I'm actually more concerned about the lack of collation import,
 but that's unfixable unless we can figure out how to identify collations
 in a platform-independent way.
 
   regards, tom lane

Thank you for working on this patch. I'm not really fond of building strings
in an FDW only for the core to parse them again, but I don't see any other
solution to the problem you mentioned.

Similarly to what we do for the schema, we should also check that the server 
in the parsed statement is the one we are importing from.


-- 
Ronan Dunklau
http://dalibo.com - http://dalibo.org



Re: [HACKERS] pg_receivexlog add synchronous mode

2014-07-10 Thread furuyao
This patch revises the comments. Each review comment is addressed as follows.

 Could you update the status of this patch from Waiting on Author to
 Needs Review when you post the revised version of the patch?
Thank you for pointing that out.
From now on, I will update the status when I post a patch.

 +How often should <application>pg_receivexlog</application> issue sync
 +commands to ensure the received WAL file is safely
 +flushed to disk without being asked by the server to do so.
 
 "without being asked by the server to do so" implies that the server asks
 pg_receivexlog to flush WAL files periodically?
I think that the sentence means the following:
without waiting for a feedback request from the server, select() times out
and the flush condition is checked.

Because it causes misunderstanding, I have deleted the sentence.

  Specifying
 +an interval of <literal>0</literal> together the consecutive
 data.
 
 This text looks corrupted. What does this mean?
 
 +Not specifying an interval disables issuing fsyncs altogether,
 +while still reporting progress the server.
 
 No. Even when the option is not specified, WAL files should be flushed
 at WAL file switch, like current pg_receivexlog does. If you want to
 disable the flush completely, you can change the option so that it accepts
 -1 which means to disable the flush, for example.
Fixed the description: fsyncs are issued only at WAL file close.

 +printf(_("  -F  --fsync-interval=SECS\n"
 +         "      frequency of syncs to the output file (default: file close only)\n"));
 
 It's better to use "transaction log files" rather than "output file"
 here.
Fixed as you pointed out.

 SECS should be INTERVAL for the sake of consistency with
 --stat-interval's help message?
Fixed as you pointed out.

 + * fsync_interval controls how often we flush to the received
 + * WAL file, in seconds.
 
 "seconds" should be "milliseconds"?
Fixed as you pointed out.

 The patch changes pg_receivexlog so that it keeps processing the
 continuous messages without calling stream_stop(). But as I commented
 before, stream_stop should be called every time the message is received?
 Otherwise pg_basebackup background WAL receiver might not be able to stop
 streaming at proper point.
Fixed so that stream_stop() is called each time the processing of one
message is complete.

 The flush interval is checked and WAL files are flushed only when
 CopyStreamReceive() returns 0, i.e., there is no message available and
 the timeout occurs. Why did you do that? I'm afraid that
 CopyStreamReceive() always waits for at least one second before WAL files
 are flushed even when --fsync-interval is set to 0.
CopyStreamReceive() waits for the shorter of --fsync-interval and
--status-interval, following what pg_recvlogical does.
Regarding specifying an interval of zero:
every message is flushed continuously, so waiting up to one second is not
a problem.

 Why don't you update output_last_status when WAL file is flushed and
 closed?
For the close case, I added the update step.
For the flush case, following pg_recvlogical, the update is performed
after the interval check and before the flush.
Therefore I did not modify it.
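To make the flush-timing rules discussed above concrete, here is a rough
sketch of the interval check under one plausible reading of the semantics
(hypothetical helper with plain time(2) granularity; the actual patch works
in milliseconds against libpq's stream, and the -1 value is Fujii's
suggestion):

#include <stdbool.h>
#include <time.h>

/*
 * Decide whether the received WAL should be flushed now.
 * fsync_interval == 0 means flush after every message; a negative
 * value means flush only at WAL file close; otherwise flush once the
 * interval (in seconds here, for simplicity) has elapsed.
 */
static bool
time_to_flush(int fsync_interval, time_t last_fsync)
{
	if (fsync_interval == 0)
		return true;
	if (fsync_interval < 0)
		return false;
	return (time(NULL) - last_fsync) >= fsync_interval;
}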

Regards,

--
Furuya Osamu



pg_receivexlog-add-fsync-interval-v2.patch
Description: pg_receivexlog-add-fsync-interval-v2.patch



Re: [HACKERS] psql: show only failed queries

2014-07-10 Thread Pavel Stehule
2014-07-10 7:32 GMT+02:00 Fujii Masao masao.fu...@gmail.com:

 On Wed, Jul 9, 2014 at 9:44 PM, Pavel Stehule pavel.steh...@gmail.com
 wrote:
  Barring any objection, I will commit this patch except tab-completion
  part.

 Committed.


Thank you very much


  It can be a separate discussion, but I am thinking that anywhere we can
  use autocomplete, we should. Although it should not substitute for the
  documentation, it is quick help about the available options, mainly in
  situations where variables can hold quite different content. If there
  is no objection, I can prepare an additional patch with complete
  autocomplete for all psql variables described in the docs.

 I have no objection. But I found that the tab-completion for psql
 variable names is not complete. For example, COMP_KEYWORD_CASE is not
 displayed. Probably we should address this problem, too.


I will prepare a patch for the next commitfest - it is a relatively trivial task

Pavel



 Regards,

 --
 Fujii Masao



Re: [HACKERS] inherit support for foreign tables

2014-07-10 Thread Shigeru Hanada
Hi Fujita-san,

Sorry for leaving this thread for a long time.

2014-06-24 16:30 GMT+09:00 Etsuro Fujita fujita.ets...@lab.ntt.co.jp:
 (2014/06/23 18:35), Ashutosh Bapat wrote:

 Hi,
 Selecting tableoid on parent causes an error, ERROR:  cannot extract
 system attribute from virtual tuple. The foreign table has an OID which
 can be reported as tableoid for the rows coming from that foreign table.
 Do we want to do that?


 No.  I think it's a bug.  I'll fix it.

IIUC, you mean that tableoid can't be retrieved when a foreign table
is accessed via its parent table, but that sounds strange to me, because
one of the main purposes of tableoid is to determine the actual source
table in appended results.

Am I missing the point?

-- 
Shigeru HANADA




Re: [HACKERS] 9.5: UPDATE/DELETE .. ORDER BY .. LIMIT ..

2014-07-10 Thread Marko Tiikkaja

On 7/10/14 5:44 AM, Amit Kapila wrote:

Basically, I wanted to say that apart from modified columns, we just
need to pass table OID.  If I am reading correctly, the same is
mentioned by Heikki as well.


Yes, Heikki was talking about that approach.  I was more interested in 
the significantly more invasive approach Tom and Andres talked about 
upthread, which your email was a response to.




.marko




Re: [HACKERS] WAL replay bugs

2014-07-10 Thread Kyotaro HORIGUCHI
Hello. The new patch looks good to me.

The usage of this is a bit irregular for an (extension) module, but
that is the nature of this 'contrib'. The rearranged page type
detection logic seems neater and keeps working as intended. This
logic will be changed once the new page type detection scheme
becomes ready in another patch.


I have some additional comments, which should be the last
ones. All of the comments are about test.sh.

- A question mark seems to be missing from the end of the message "Has
  build been done with -DBUFFER_CAPTURE included in CFLAGS" in
  test.sh.

- "make check" runs regress with --use-existing, but IMHO the make
  target which is expected to do that is "installcheck". I had fooled
  myself into believing that it should run a dedicatedly-built
  postgres :(

- postgres processes are left running after
  test_default(custom).sh has stopped halfway. This can be fixed
  with the attached patch, but, to be honest, this seems too
  much. I'll follow your decision whether or not to do this.
  (bufcapt_test_sh_20140710.patch)

- test_default.sh is not only an example script which runs while
  utilizing this facility, but also the test script for the facility
  itself.

  So I think it would be better to add some queries so that all
  possible page types are available for the default testing. What do
  you think about the attached patch?  (A hash index is unlogged,
  but I dared to include it for clarity.)

  (bufcapt_test_default_sh_20140710.patch)

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
diff --git a/contrib/buffer_capture_cmp/test.sh b/contrib/buffer_capture_cmp/test.sh
index 89740bb..ba5e290 100644
--- a/contrib/buffer_capture_cmp/test.sh
+++ b/contrib/buffer_capture_cmp/test.sh
@@ -117,16 +117,27 @@ pg_ctl -w -D $TEST_STANDBY start
 # Check the presence of custom tests and kick them in priority. If not,
 # fallback to the default tests. Tests need only to be run on the master
 # node.
+
 if [ -f ./test-custom.sh ]; then
-	. ./test-custom.sh
+	TEST_SCRIPT=test-custom.sh
 else
-	. ./test-default.sh
+	TEST_SCRIPT=test-default.sh
 fi
 
+set +e
+bash -e $TEST_SCRIPT
+EXITSTATUS=$?
+set -e
+
 # Time to stop the nodes as tests have run
 pg_ctl -w -D $TEST_MASTER stop
 pg_ctl -w -D $TEST_STANDBY stop
 
+if [ $EXITSTATUS != 0 ]; then
+	echo "$TEST_SCRIPT exited with an error"
+	exit 1;
+fi
+
 DIFF_FILE=capture_differences.txt
 
 # Check if the capture files exist. If not, build may have not been
diff --git a/contrib/buffer_capture_cmp/test-default.sh b/contrib/buffer_capture_cmp/test-default.sh
index 5bec503..24091ff 100644
--- a/contrib/buffer_capture_cmp/test-default.sh
+++ b/contrib/buffer_capture_cmp/test-default.sh
@@ -11,4 +11,16 @@
 # cd $ROOT_DIR
 
 # Create a simple table
-psql -c 'CREATE TABLE aa AS SELECT generate_series(1, 10) AS a'
+psql -c 'CREATE TABLE tbtree AS SELECT generate_series(1, 10) AS a'
+psql -c 'CREATE INDEX i_tbtree ON tbtree USING btree(a)'
+psql -c 'CREATE TABLE thash AS SELECT generate_series(1, 10) AS a'
+psql -c 'CREATE INDEX i_thash ON thash USING hash(a)'
+psql -c 'CREATE TABLE tgist AS SELECT POINT(a, a) AS p1 FROM generate_series(0, 10) a'
+psql -c 'CREATE INDEX i_tgist ON tgist USING gist(p1)'
+psql -c 'CREATE TABLE tspgist AS SELECT POINT(a, a) AS p1 FROM generate_series(0, 10) a'
+psql -c 'CREATE INDEX i_tspgist ON tspgist USING spgist(p1)'
+psql -c 'CREATE TABLE tgin AS SELECT ARRAY[a/10, a%10] as a1 from generate_series(0, 10) a'
+psql -c 'CREATE INDEX i_tgin ON tgin USING gin(a1)'
+psql -c 'CREATE SEQUENCE sq1'
+
+



Re: [HACKERS] Use unique index for longer pathkeys.

2014-07-10 Thread Kyotaro HORIGUCHI
Thank you for taking a look on this patch.

 I took a quick look at this patch, more or less because nobody else did.
 
  During the last CF, I proposed to match long pathkeys against index columns
  while creating index paths. This worked fine, but it is difficult
  to make sure that all side-effects are eliminated. Finally Tom Lane
  suggested truncating the pathkeys during the generation of the
  pathkeys itself. Hence this patch.
 
 I found your older patch quite straightforward to understand, but the
 new one much more difficult to follow (but that's not saying much, I
 am not very familiar with the planner code in general).

I think it's quite natural to think so.

 Do you have any references to the discussion about the side-effects that
 needed to be eliminated with the earlier patch?

The biggest side-effect (or simply a defect) found so far is
discussed here:

http://www.postgresql.org/message-id/01bd01cf0b4e$9b960ad0$d2c22070$@ets...@lab.ntt.co.jp

This was caused by omitting the membership of the Var under
inspection while checking whether the pathkeys are extensible.

http://www.postgresql.org/message-id/20140107.145900.196068363.horiguchi.kyot...@lab.ntt.co.jp

In the end, Tom said that the right way to do this is not such a
whack-a-mole approach, but to loosen the pathkeys beforehand so that
the planner naturally does what the previous patch did without any
special treatment.

http://www.postgresql.org/message-id/5212.1397599...@sss.pgh.pa.us

So the new patch comes.


regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: [HACKERS] pg_resetxlog to clear backup start/end locations.

2014-07-10 Thread Kyotaro HORIGUCHI
Hello,

   Anyway, I think that making -n option display all the values that -f 
   option
   changes would be useful. But since that's not a bugfix, we should apply it
   only in HEAD.
  
  Agreed.
 
 Is this a TODO item?

It's not. The 'bug' was my wrong guess; I've found that it is already
done by default.

I'm sorry to have bothered you.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: [HACKERS] Re: how to find the order of joins from Explain command XML plan output in PostgreSQL

2014-07-10 Thread Guillaume Lelarge
On July 9, 2014 at 20:36, David G Johnston david.g.johns...@gmail.com
wrote:

 csrajmohan wrote:
  The EXPLAIN (format XML) command in PostgreSQL 9.3.4 gives the plan
  chosen by the optimizer in XML format. In my program, I have to extract
  certain data about the optimizer plan from this XML output. I am using
  the LibXML2 library for parsing the XML. I had successfully extracted
  information about which relations are involved and what joins are used
  by parsing the XML. But I am unable to extract the order of joining the
  relations from the XML output. I conceptually understood that a reverse
  level-order traversal of the binary tree representation of the XML plan
  will give the correct ordering of the joins applied. But I could not
  figure out how to get that from the XML. Does libXML2 support anything
  of this sort? If not, how should I proceed to tackle this?

 So, since nothing better has been forthcoming in your other two posts on
 this topic, I'll just say that you will likely have much better luck using
 SAX-based processing as opposed to DOM-based processing.  I seriously
 doubt native/core PostgreSQL facilities will allow you to do what you
 desire.

 As you said, hierarchy and physical output order determine the order of
 joining within the planner, so you have to capture and track such
 relational information during your processing - which is made much easier
 if you simply traverse the output node-by-node exactly as a SAX-based
 parser does.

 Though pgAdminIII has a visual query display that you might look at for
 inspiration.


FWIW, pgadmin's visual explain doesn't (yet?) use XML or json or yaml
output.


Re: [HACKERS] add line number as prompt option to psql

2014-07-10 Thread Jeevan Chalke
Hi,

Found a few more bugs in the new code:

A:
This got bad:

jeevan@ubuntu:~/pg_master$ ./install/bin/psql postgres
psql (9.5devel)
Type "help" for help.

postgres=# \set PROMPT1 '%/[%l]%R%# '
postgres[1]=# \set PROMPT2 '%/[%l]%R%# '
postgres[1]=# select
postgres[2]-# *
postgres[3]-# from
postgres[4]-# abc;
ERROR:  syntax error at or near "fromabc"
LINE 1: select*fromabc;
               ^
postgres[1]=#
postgres[1]=#
postgres[1]=# \e
ERROR:  syntax error at or near "fromabc"
LINE 1: select*fromabc;
               ^
postgres[1]=# select*fromabc;
ERROR:  syntax error at or near "fromabc"
LINE 1: select*fromabc;
               ^
postgres[1]=#


See the query text in LINE 1. This is because you have removed the
addition of the newline character (related to added_nl_pos). This needs
more investigation; however, I don't think these changes are relevant to
what you wanted in this feature.
Will you please explain the idea behind these changes?

Moreover, if you don't want to add the newline character, then I guess the
entire logic related to added_nl_pos is no longer required. You may remove
this variable and its logic altogether (not sure, though). Also make sure
you update the relevant comments while doing so. Here you have removed the
code which adds the newline, but the comment there still reads:
/* insert newlines into query buffer between source lines */

Need more thoughts on this.


B:
postgres=# \set PROMPT1 '%/[%l]%R%# '
postgres[1]=# \set PROMPT2 '%/[%l]%R%# '
postgres[1]=# \e
postgres[-2147483645]-# limit 1;
   relname
--
 pg_statistic
(1 row)

postgres[1]=#
postgres[1]=# select
relname
from
pg_class
limit 1;

The logic for wrapping the cur_line counter around is wrong. Actually,
the issue is with the newline variable. If the number of lines in the \e
editor goes beyond INT_MAX (not sure about the practical use), newline
becomes negative, which then forces cur_line to be negative. To mimic
this, I initialized newline = INT_MAX - 1.
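One hedged way to avoid the negative counter would be to clamp the
increment explicitly. A minimal sketch with a hypothetical helper name,
not the patch's actual code:

#include <limits.h>

/*
 * Advance the 1-based line counter, wrapping back to 1 instead of
 * overflowing INT_MAX into negative territory.
 */
static int
advance_cur_line(int cur_line)
{
	return (cur_line >= INT_MAX) ? 1 : cur_line + 1;
}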

Thanks

-- 
Jeevan B Chalke
Principal Software Engineer, Product Development
EnterpriseDB Corporation
The Enterprise PostgreSQL Company


Re: [HACKERS] psql: show only failed queries

2014-07-10 Thread Pavel Stehule
Hello

here is a proposed patch - autocomplete for known psql variable content

Regards

Pavel


2014-07-10 9:50 GMT+02:00 Pavel Stehule pavel.steh...@gmail.com:




 2014-07-10 7:32 GMT+02:00 Fujii Masao masao.fu...@gmail.com:

 On Wed, Jul 9, 2014 at 9:44 PM, Pavel Stehule pavel.steh...@gmail.com
 wrote:
  Barring any objection, I will commit this patch except tab-completion
  part.

 Committed.


 Thank you very much


  It can be a separate discussion, but I am thinking that anywhere we can
  use autocomplete, we should. Although it should not substitute for the
  documentation, it is quick help about the available options, mainly in
  situations where variables can hold quite different content. If there
  is no objection, I can prepare an additional patch with complete
  autocomplete for all psql variables described in the docs.

 I have no objection. But I found that the tab-completion for psql
 variable names is not complete. For example, COMP_KEYWORD_CASE is not
 displayed. Probably we should address this problem, too.


 I will prepare a patch for the next commitfest - it is a relatively trivial task

 Pavel



 Regards,

 --
 Fujii Masao



commit 7cba776aea228165c5b65f7077515328a0efa039
Author: root root@localhost.localdomain
Date:   Thu Jul 10 14:54:36 2014 +0200

initial concept

diff --git a/src/bin/psql/startup.c b/src/bin/psql/startup.c
index 5a397e8..f05dfe2 100644
--- a/src/bin/psql/startup.c
+++ b/src/bin/psql/startup.c
@@ -147,6 +147,8 @@ main(int argc, char *argv[])
 	SetVariable(pset.vars, "PROMPT2", DEFAULT_PROMPT2);
 	SetVariable(pset.vars, "PROMPT3", DEFAULT_PROMPT3);
 
+	SetVariable(pset.vars, "COMP_KEYWORD_CASE", "preserve-upper");
+
 	parse_psql_options(argc, argv, options);
 
 	/*
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index bab0357..41ae499 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3592,6 +3592,79 @@ psql_completion(const char *text, int start, int end)
 	{
 		matches = complete_from_variables(text, "", "");
 	}
+	else if (strcmp(prev2_wd, "\\set") == 0)
+	{
+		if (strcmp(prev_wd, "AUTOCOMMIT") == 0)
+		{
+			static const char *const my_list[] =
+			{"on", "off", "interactive", NULL};
+
+			COMPLETE_WITH_LIST_CS(my_list);
+		}
+		else if (strcmp(prev_wd, "COMP_KEYWORD_CASE") == 0)
+		{
+			static const char *const my_list[] =
+			{"lower", "upper", "preserve-lower", "preserve-upper", NULL};
+
+			COMPLETE_WITH_LIST_CS(my_list);
+		}
+		else if (strcmp(prev_wd, "ECHO") == 0)
+		{
+			static const char *const my_list[] =
+			{"none", "errors", "queries", "all", NULL};
+
+			COMPLETE_WITH_LIST_CS(my_list);
+		}
+		else if (strcmp(prev_wd, "ECHO_HIDDEN") == 0)
+		{
+			static const char *const my_list[] =
+			{"noexec", "off", "on", NULL};
+
+			COMPLETE_WITH_LIST_CS(my_list);
+		}
+		else if (strcmp(prev_wd, "ON_ERROR_ROLLBACK") == 0)
+		{
+			static const char *const my_list[] =
+			{"on", "off", "interactive", NULL};
+
+			COMPLETE_WITH_LIST_CS(my_list);
+		}
+		else if (strcmp(prev_wd, "ON_ERROR_STOP") == 0)
+		{
+			static const char *const my_list[] =
+			{"on", "off", NULL};
+
+			COMPLETE_WITH_LIST_CS(my_list);
+		}
+		else if (strcmp(prev_wd, "QUIET") == 0)
+		{
+			static const char *const my_list[] =
+			{"on", "off", NULL};
+
+			COMPLETE_WITH_LIST_CS(my_list);
+		}
+		else if (strcmp(prev_wd, "SINGLELINE") == 0)
+		{
+			static const char *const my_list[] =
+			{"on", "off", NULL};
+
+			COMPLETE_WITH_LIST_CS(my_list);
+		}
+		else if (strcmp(prev_wd, "SINGLESTEP") == 0)
+		{
+			static const char *const my_list[] =
+			{"on", "off", NULL};
+
+			COMPLETE_WITH_LIST_CS(my_list);
+		}
+		else if (strcmp(prev_wd, "VERBOSITY") == 0)
+		{
+			static const char *const my_list[] =
+			{"default", "verbose", "terse", NULL};
+
+			COMPLETE_WITH_LIST_CS(my_list);
+		}
+	}
 	else if (strcmp(prev_wd, "\\sf") == 0 || strcmp(prev_wd, "\\sf+") == 0)
 		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_functions, NULL);
 	else if (strcmp(prev_wd, "\\cd") == 0 ||



Re: [HACKERS] Missing IPv6 for pgbuildfarm.org

2014-07-10 Thread Andrew Dunstan


On 07/02/2014 05:08 AM, Torsten Zuehlsdorff wrote:

Hello,

I've tried to set up a FreeBSD 10 machine as a buildfarm member, but it's
an IPv6-only machine and there is no IPv6 for the homepage.


Can anyone add support for IPv6 to it?




I'm looking into this.

cheers

andrew





Re: [HACKERS] add line number as prompt option to psql

2014-07-10 Thread Sawada Masahiko
On Thu, Jul 10, 2014 at 8:35 PM, Jeevan Chalke
jeevan.cha...@enterprisedb.com wrote:
 Hi,

 Found a few more bugs in the new code:

 A:
 This got bad:

 jeevan@ubuntu:~/pg_master$ ./install/bin/psql postgres
 psql (9.5devel)
 Type "help" for help.


 postgres=# \set PROMPT1 '%/[%l]%R%# '
 postgres[1]=# \set PROMPT2 '%/[%l]%R%# '
 postgres[1]=# select
 postgres[2]-# *
 postgres[3]-# from
 postgres[4]-# abc;
 ERROR:  syntax error at or near "fromabc"
 LINE 1: select*fromabc;
                ^
 postgres[1]=#
 postgres[1]=#
 postgres[1]=# \e
 ERROR:  syntax error at or near "fromabc"
 LINE 1: select*fromabc;
                ^
 postgres[1]=# select*fromabc;
 ERROR:  syntax error at or near "fromabc"
 LINE 1: select*fromabc;
                ^
 postgres[1]=#


 See the query text in LINE 1. This is because you have removed the
 addition of the newline character (related to added_nl_pos). This needs
 more investigation; however, I don't think these changes are relevant
 to what you wanted in this feature.
 Will you please explain the idea behind these changes?

 Moreover, if you don't want to add the newline character, then I guess
 the entire logic related to added_nl_pos is no longer required. You may
 remove this variable and its logic altogether (not sure, though). Also
 make sure you update the relevant comments while doing so. Here you
 have removed the code which adds the newline, but the comment there
 still reads:
 /* insert newlines into query buffer between source lines */

 Need more thoughts on this.


 B:

 postgres=# \set PROMPT1 '%/[%l]%R%# '
 postgres[1]=# \set PROMPT2 '%/[%l]%R%# '
 postgres[1]=# \e
 postgres[-2147483645]-# limit 1;
relname
 --
  pg_statistic
 (1 row)


 postgres[1]=#
 postgres[1]=# select
 relname
 from
 pg_class
 limit 1;

 The logic for wrapping the cur_line counter around is wrong. Actually,
 the issue is with the newline variable. If the number of lines in the \e
 editor goes beyond INT_MAX (not sure about the practical use), newline
 becomes negative, which then forces cur_line to be negative. To mimic
 this, I initialized newline = INT_MAX - 1.


Thank you for reviewing the patch and finding these cases.
I have revised the patch; the latest version is attached.

 A:
 Will you please explain the idea behind these changes ?
I was wrong about appending a newline to the tail of query_buf.
The latest patch does not change that code.

 B:
I added the condition of cur_line < 0.


Regards,

---
Sawada Masahiko


psql-line-number_v4.patch
Description: Binary data



[HACKERS] timeout of pg_receivexlog --status-interval

2014-07-10 Thread Sawada Masahiko
Hi,

At line 1047 of receivelog.c, in CopyStreamPoll(), we set the timeoutptr
variable to NULL.
If timeoutptr is set to NULL, then the process will wait until the socket
becomes readable, using the select() function as follows.

if (timeout_ms < 0)
    timeoutptr = NULL;
else
{
    timeout.tv_sec = timeout_ms / 1000L;
    timeout.tv_usec = (timeout_ms % 1000L) * 1000L;
    timeoutptr = &timeout;
}

ret = select(PQsocket(conn) + 1, &input_mask, NULL, NULL, timeoutptr);

But line 1047 of receivelog.c is never executed, because the value of
timeout_ms is never less than 0 in CopyStreamReceive(), which is the
only function that calls CopyStreamPoll().
In the current code, if we specify -s as 0, the CopyStreamPoll() function
is never called, and pg_receivexlog will execute PQgetCopyData() and fail,
over and over.

I think that is a contradiction; we should execute the select() function
with NULL as its timeout argument.
The attached patch allows select() to be executed with NULL, i.e.,
pg_receivexlog will wait until the socket becomes readable, without a
timeout, if -s is specified as 0.
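For reference, the select() behavior being relied on: a NULL timeout
pointer blocks indefinitely until the descriptor is readable, while a
struct timeval bounds the wait. A self-contained sketch mirroring the
quoted code (hypothetical wrapper, not the patch itself):

#include <stddef.h>
#include <sys/select.h>

/*
 * Wait for sock to become readable.  timeout_ms < 0 passes a NULL
 * timeout to select(), blocking until data arrives; otherwise the
 * wait is bounded by timeout_ms milliseconds.
 */
static int
wait_for_socket(int sock, long timeout_ms)
{
	fd_set		input_mask;
	struct timeval timeout;
	struct timeval *timeoutptr;

	FD_ZERO(&input_mask);
	FD_SET(sock, &input_mask);

	if (timeout_ms < 0)
		timeoutptr = NULL;	/* block without timeout */
	else
	{
		timeout.tv_sec = timeout_ms / 1000L;
		timeout.tv_usec = (timeout_ms % 1000L) * 1000L;
		timeoutptr = &timeout;
	}
	return select(sock + 1, &input_mask, NULL, NULL, timeoutptr);
}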

Regards,
---
Sawada Masahiko


receivexlog-timout.patch
Description: Binary data



Re: [HACKERS] Proposing pg_hibernate

2014-07-10 Thread MauMau

Hello,

I've finished reviewing the code.  I already marked this patch as "waiting
on author".  I'll be waiting for the revised patch, then proceed to running
the program when the patch seems reasonable.


(12)
Like worker_spi, save and restore errno in signal handlers.


(13)
Remove the following comment, and similar ones if any, because this module 
is to be included in 9.5 and above.


/*
 * In Postgres version 9.4 and above, we use the dynamic background worker
 * infrastructure for BlockReaders, and the BufferSaver process does the
 * legwork of registering the BlockReader workers.
 */


(14)
Expand the bodies of the following functions at their call sites and remove
the function definitions, because they are called only once.  That would
make it straightforward to see the processing where it belongs.


* DefineGUCs
* CreateDirectory


(15)
I don't understand the use case of pg_hibernator.enabled.  Is this really 
necessary?  In what situations does this parameter really matter?  If it's 
not crucial, why don't we remove it and make the specification simpler?



(16)
As mentioned above, you can remove CreateDirectory().  Instead, you can just 
mkdir() unconditionally and check the result, without first stat()ing.



(17)
Use AllocateDir/ReadDir/FreeDir instead of opendir/readdir/closedir in the 
server process.  Plus, follow the error handling style of other source files 
using AllocateDir.
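For illustration, the usual backend pattern looks roughly like this (a
sketch; the directory-scanning helper name is an assumption, not the
module's actual code):

#include "postgres.h"
#include "storage/fd.h"

/*
 * Scan a save-file directory with the backend wrappers.  If
 * AllocateDir() fails it returns NULL, and ReadDir() then reports the
 * open error with ereport(), following the standard style.
 */
static void
scan_save_directory(const char *path)
{
	DIR		   *dir = AllocateDir(path);
	struct dirent *de;

	while ((de = ReadDir(dir, path)) != NULL)
	{
		/* process de->d_name here */
	}
	FreeDir(dir);
}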



(18)
The local variable hibernate_dir appears to be unnecessary because it always 
holds SAVE_LOCATION as its value.  If so, remove it.



(19)
I think the severity below should be WARNING, but I don't insist.

ereport(LOG, (errmsg("registration of background worker failed")));


(20)
"iff" should be "if".

/* Remove the element from pending list iff we could register a worker 
successfully. */




(21)
How is this true?  Does the shared system catalog always have at least one 
shared buffer?


  /* Make sure the first buffer we save belongs to global object. */
  Assert(buf->database == InvalidOid);
...
   * Special case for global objects. The sort brings them to the
   * front of the list.


(22)
The format should be %u, not %d.

 ereport(log_level,
   (errmsg("writer: writing block db %d filenode %d forknum %d blocknum %d",
           database_counter, prev_filenode, prev_forknum, buf->blocknum)));


(23)
Why is the first while loop in BlockReaderMain() necessary?  Just opening 
the target save file isn't enough?




(24)
Use MemoryContextAllocHuge().  palloc() can only allocate chunks up to 1GB.

 * This is not a concern as of now, so deferred; there's at least one other
 * place that allocates (NBuffers * (much_bigger_struct)), so this seems to
 * be an acceptable practice.
 */

saved_buffers = (SavedBuffer *) palloc(sizeof(SavedBuffer) * NBuffers);
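For illustration, the change amounts to something like this sketch
(SavedBuffer is pg_hibernator's own struct; CurrentMemoryContext is used
here only for brevity):

#include "postgres.h"
#include "utils/memutils.h"

/*
 * palloc() is limited to MaxAllocSize (just under 1GB), so with a
 * large NBuffers the array can exceed the cap.  The "huge" variant
 * allocates beyond that limit.
 */
static SavedBuffer *
allocate_saved_buffers(void)
{
	return (SavedBuffer *)
		MemoryContextAllocHuge(CurrentMemoryContext,
							   (Size) NBuffers * sizeof(SavedBuffer));
}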


(25)
It's better for the .save files to be created per tablespace, not per 
database.  Tablespaces are more likely to be allocated on different storage 
devices for I/O distribution and capacity.  So, it would be more natural to 
expect that we can restore buffers more quickly by letting multiple Block 
Readers do parallel I/O on different storage devices.



(26)
Reading the below description in the documentation, it seems that Block 
Readers can exit with 0 upon successful completion, because bgw_restart_time 
is set to BGW_NEVER_RESTART.


If bgw_restart_time for a background worker is configured as 
BGW_NEVER_RESTART, or if it exits with an exit code of 0 or is terminated by 
TerminateBackgroundWorker, it will be automatically unregistered by the 
postmaster on exit.



(27)
As others said, I also think that Buffer Saver should first write to a temp 
file, say *.tmp, then rename it to *.save upon completion.  That prevents 
the Block Reader from reading possibly corrupted half-baked file that does 
not represent any hibernated state.
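A rough sketch of that write-temp-then-rename pattern in plain POSIX C
(hypothetical paths and helper name; real code would report errors through
the server's facilities):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Write data to tmp_path, fsync it, then rename() it into place.
 * rename() is atomic, so a reader never sees a half-written .save
 * file: it finds either the old file or the complete new one.
 */
static int
save_file_atomically(const char *tmp_path, const char *final_path,
					 const void *data, size_t len)
{
	int			fd = open(tmp_path, O_CREAT | O_TRUNC | O_WRONLY, 0600);

	if (fd < 0)
		return -1;
	if (write(fd, data, len) != (ssize_t) len || fsync(fd) != 0)
	{
		close(fd);
		unlink(tmp_path);
		return -1;
	}
	close(fd);
	return rename(tmp_path, final_path);
}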



Regards
MauMau






Re: [HACKERS] IMPORT FOREIGN SCHEMA statement

2014-07-10 Thread Tom Lane
I wrote:
 Michael Paquier michael.paqu...@gmail.com writes:
 I guess that this implementation is enough as a first shot, particularly
 regarding the complexity that default expression import would bring up for
 postgres_fdw (point raised during the review, but not really discussed
 afterwards).

 Yeah.  I'm actually more concerned about the lack of collation import,
 but that's unfixable unless we can figure out how to identify collations
 in a platform-independent way.

I had second thoughts about this: now that the code is generating a
textual CREATE FOREIGN TABLE command, it would actually be just about
trivial to pull back the text representation of the remote's default
expression and attempt to apply it to the foreign table.  Likewise we
could ignore the platform-dependency issues in collations and just
attempt to apply whatever collation name is reported by the remote.

Arguably both of these things would be too risky to do by default,
but if we make them be controlled by IMPORT options, why not?

In the case of a default expression, AFAICS there are two possible
failure modes.  The local server might not have some function or
operator used by the remote; in which case, you'd get a pretty easy
to understand error message, and it would be obvious what to do if
you wanted to fix it.  Or we might have a similarly-named function
or operator that behaves differently (an important special case
is where the function is volatile and you'd get a different answer
locally, eg nextval()).  Well, that's a problem, and we'd need to
document it, but I don't see that that risk is so bad that we should
refuse to import any default expressions ever.  Especially when
common cases like 0 or now() can be expected to work fine.

Similarly, in the case of a collation name, we might not have the
same locale name, which would be annoying but not dangerous.  Or
we might have a similarly-named locale that actually has sorting
rules different from the remote's.  But that would only be a real
risk when pulling from a remote that's on a different OS from the
local server, and in any case whatever discrepancy might exist is
unlikely to be worse than failing to import a collation spec at all.
Not importing the remote's spec is *guaranteed* to be wrong.

So I propose we invent a couple more import options, say
import_default and import_collate, and make the postgres_fdw
code do the obvious thing with them.  import_default should
probably default to false, but I'm about halfway tempted to
say that import_collate should default to true --- if you're
importing a collatable column and you don't have a matching
locale locally, it seems like it'd be better if we complain
about that by default.

Comments?

regards, tom lane




Re: [HACKERS] IMPORT FOREIGN SCHEMA statement

2014-07-10 Thread Tom Lane
Ronan Dunklau ronan.dunk...@dalibo.com writes:
 Similarly to what we do for the schema, we should also check that the server 
 in the parsed statement is the one we are importing from.

Hm.  The FDW would really have to go out of its way to put the wrong thing
there, so is this worth the trouble?  It's not like the creation schema
where you can easily omit it from the command; the syntax requires it.

regards, tom lane




Re: [HACKERS] Pg_upgrade and toast tables bug discovered

2014-07-10 Thread Robert Haas
On Wed, Jul 9, 2014 at 12:09 PM, Bruce Momjian br...@momjian.us wrote:
 To me, that sounds vastly more complicated and error-prone than
 forcing the TOAST tables to be added in a second pass as Andres
 suggested.

 But I just work here.

 Agreed.  I am now thinking we could harness the code that already exists
 to optionally add a TOAST table as part of ALTER TABLE ADD COLUMN.  We
 would just need an entry point to call it from pg_upgrade, either via an
 SQL command that checks (and hopefully doesn't do anything else), or a C
 function that does it, e.g. VACUUM would be trivial to run on every
 database, but I don't think it tests that; it _could_ in binary_upgrade
 mode.  However, the idea of having a C function plug into the guts of
 the server and call internal functions makes me uncomfortable.

Well, pg_upgrade_support's charter is basically to provide access to
the guts of the server in ways we wouldn't normally allow; all that
next-OID stuff is basically exactly that.  So I don't think this is
such a big deal.  It needs to be properly commented, of course.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] wrapping in extended mode doesn't work well with default pager

2014-07-10 Thread Greg Stark
On Mon, Jul 7, 2014 at 8:35 AM, Sergey Muraviov
sergey.k.murav...@gmail.com wrote:
 So what's wrong with the patch?
 And what should I change in it for 9.5?

Possibly nothing. The concern was that it's modifying the output in
cases where the output is not expanded and/or not wrapped. Now I've
mostly convinced myself that those cases should change to be
consistent with the wrapped output, but there was at least one tool
depending on that format (check_postgres), so perhaps it's not
worthwhile, and having it be inconsistent would be safer.

-- 
greg




Re: [HACKERS] RLS Design

2014-07-10 Thread Robert Haas
On Wed, Jul 9, 2014 at 2:13 AM, Stephen Frost sfr...@snowman.net wrote:
 Robert,

 * Robert Haas (robertmh...@gmail.com) wrote:
 If you're going to have predicates be table-level and access grants be
 table-level, then what's the value in having policies?  You could just
 do:

 ALTER TABLE table_name GRANT ROW ACCESS TO role_name USING quals;

 Yes, this would be possible (and is nearly identical to the original
 patch, except that this includes per-role considerations), however, my
 thinking is that it'd be simpler to work with policy names rather than
 sets of quals, to use when mapping to roles, and they would potentially
 be useful later for other things (eg: for setting up which policies
 should be applied when, or which should be OR'd or AND'd with other
 policies, or having groups of policies, etc).

Hmm.  I guess that's reasonable.  Should the policy be a per-table
object (like rules, constraints, etc.) instead of a global object?

You could do:

ALTER TABLE table_name ADD POLICY policy_name (quals);
ALTER TABLE table_name POLICY FOR role_name IS policy_name;
ALTER TABLE table_name DROP POLICY policy_name;

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] pgindent weirdness

2014-07-10 Thread Robert Haas
In contrib/test_shm_mq/test.c, the 9.4 pgindent run
(0a7832005792fa6dad171f9cadb8d587fe0dd800) did this:

-PG_MODULE_MAGIC;
-PG_FUNCTION_INFO_V1(test_shm_mq);
+PG_MODULE_MAGIC; PG_FUNCTION_INFO_V1(test_shm_mq);
 PG_FUNCTION_INFO_V1(test_shm_mq_pipelined);

This is obviously not an improvement.  Is there some formatting rule
that I violated in the original code that lead to this, or what?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Minmax indexes

2014-07-10 Thread Claudio Freire
On Wed, Jul 9, 2014 at 6:16 PM, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
 Another thing I noticed is that version 8 of the patch blindly believed
 the pages_per_range declared in catalogs.  This meant that if somebody
 did alter index foo set pages_per_range=123 the index would
 immediately break (i.e. return corrupted results when queried).  I have
 fixed this by storing the pages_per_range value used to construct the
 index in the metapage.  Now if you do the ALTER INDEX thing, the new
 value is only used when the index is recreated by REINDEX.

This seems a lot like parameterizing. So I guess the only thing left
is to issue a NOTICE when said alter takes place (I don't see that on
the patch, but maybe it's there?)
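For concreteness, a rough sketch of what such a metapage might carry
(hypothetical layout and names; the actual minmax patch's on-disk format
may differ):

#include "postgres.h"
#include "storage/block.h"

/*
 * Hypothetical minmax metapage contents: pagesPerRange is captured at
 * build time, so ALTER INDEX ... SET (pages_per_range = ...) changes
 * only the catalog value; scans keep using the value stored here until
 * a REINDEX rewrites the metapage.
 */
typedef struct MinmaxMetaPageData
{
	uint32		magic;			/* identifies a minmax metapage */
	uint32		version;		/* on-disk format version */
	BlockNumber pagesPerRange;	/* value the index was built with */
} MinmaxMetaPageData;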




Re: [HACKERS] pgindent weirdness

2014-07-10 Thread Bruce Momjian
On Thu, Jul 10, 2014 at 12:02:50PM -0400, Robert Haas wrote:
 In contrib/test_shm_mq/test.c, the 9.4 pgindent run
 (0a7832005792fa6dad171f9cadb8d587fe0dd800) did this:
 
 -PG_MODULE_MAGIC;
 -PG_FUNCTION_INFO_V1(test_shm_mq);
 +PG_MODULE_MAGIC; PG_FUNCTION_INFO_V1(test_shm_mq);
  PG_FUNCTION_INFO_V1(test_shm_mq_pipelined);
 
 This is obviously not an improvement.  Is there some formatting rule
 that I violated in the original code that lead to this, or what?

Uh, every other case of PG_MODULE_MAGIC had blank lines before/after the
define.  I went and added that to test_shm_mq/test.c, and adjusted other
blank lines to be consistent.  I did not modify 9.4, as this is cosmetic.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] IMPORT FOREIGN SCHEMA statement

2014-07-10 Thread Tom Lane
I wrote:
 So I propose we invent a couple more import options, say
 import_default and import_collate, and make the postgres_fdw
 code do the obvious thing with them.  import_default should
 probably default to false, but I'm about halfway tempted to
 say that import_collate should default to true --- if you're
 importing a collatable column and you don't have a matching
 locale locally, it seems like it'd be better if we complain
 about that by default.

I've committed this patch with that addition and some minor additional
cleanup.

regards, tom lane




Re: [HACKERS] Minmax indexes

2014-07-10 Thread Alvaro Herrera
Claudio Freire wrote:
 On Wed, Jul 9, 2014 at 6:16 PM, Alvaro Herrera alvhe...@2ndquadrant.com 
 wrote:
  Another thing I noticed is that version 8 of the patch blindly believed
  the pages_per_range declared in catalogs.  This meant that if somebody
  did alter index foo set pages_per_range=123 the index would
  immediately break (i.e. return corrupted results when queried).  I have
  fixed this by storing the pages_per_range value used to construct the
  index in the metapage.  Now if you do the ALTER INDEX thing, the new
  value is only used when the index is recreated by REINDEX.
 
 This seems a lot like parameterizing.

I don't understand what that means -- care to elaborate?

 So I guess the only thing left is to issue a NOTICE when said alter
 takes place (I don't see that on the patch, but maybe it's there?)

That's not in the patch.  I don't think we have an appropriate place to
emit such a notice.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] Missing autocomplete for CREATE DATABASE

2014-07-10 Thread Magnus Hagander
It seems psql is missing autocomplete entries for LC_COLLATE and
LC_CTYPE for the CREATE DATABASE command. Attached patch adds that.

I doubt this is important enough to backpatch - thoughts?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 3bb727f..f746ddc 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -2046,7 +2046,7 @@ psql_completion(const char *text, int start, int end)
 	{
 		static const char *const list_DATABASE[] =
 		{"OWNER", "TEMPLATE", "ENCODING", "TABLESPACE", "CONNECTION LIMIT",
-		NULL};
+		 "LC_COLLATE", "LC_CTYPE", NULL};
 
 		COMPLETE_WITH_LIST(list_DATABASE);
 	}



Re: [HACKERS] tweaking NTUP_PER_BUCKET

2014-07-10 Thread Tomas Vondra
On 9.7.2014 16:07, Robert Haas wrote:
 On Tue, Jul 8, 2014 at 5:16 PM, Tomas Vondra t...@fuzzy.cz wrote:
 Thinking about this a bit more, do we really need to build the hash
 table on the first pass? Why not to do this:

 (1) batching
 - read the tuples, stuff them into a simple list
 - don't build the hash table yet

 (2) building the hash table
 - we have all the tuples in a simple list, batching is done
 - we know exact row count, can size the table properly
 - build the table
 
 We could do this, and in fact we could save quite a bit of memory if
 we allocated say 1MB chunks and packed the tuples in tightly instead
 of palloc-ing each one separately.  But I worry that rescanning the
 data to build the hash table would slow things down too much.

I did a quick test of how much memory we could save by this. The
attached patch densely packs the tuples into 32kB chunks (1MB seems way
too much because of small work_mem values, but I guess this might be
tuned based on number of tuples / work_mem size ...).

Tested on a query like this (see the first message in this thread for how
to generate the tables):

 QUERY PLAN
---
 Aggregate  (cost=2014697.64..2014697.65 rows=1 width=33) (actual
time=63796.270..63796.271 rows=1 loops=1)
   ->  Hash Left Join  (cost=318458.14..1889697.60 rows=5016
width=33) (actual time=2865.656..61778.592 rows=5000 loops=1)
         Hash Cond: (o.id = i.id)
         ->  Seq Scan on outer_table o  (cost=0.00..721239.16
rows=5016 width=4) (actual time=0.033..2676.234 rows=5000 loops=1)
         ->  Hash  (cost=193458.06..193458.06 rows=1006 width=37)
(actual time=2855.408..2855.408 rows=1000 loops=1)
               Buckets: 1048576  Batches: 1  Memory Usage: 703125kB
               ->  Seq Scan on inner_table i  (cost=0.00..193458.06
rows=1006 width=37) (actual time=0.044..952.802 rows=1000 loops=1)
 Planning time: 1.139 ms
 Execution time: 63889.056 ms
(9 rows)

I.e. it creates a single batch with ~700 MB of tuples. Without the
patch, top shows this:

    VIRT    RES    SHR S  %CPU %MEM   COMMAND
  2540356 1,356g   5936 R 100,0 17,6   postgres: EXPLAIN

and the MemoryContextStats added to MultiExecHash shows this:

HashBatchContext: 1451221040 total in 182 blocks; 2826592 free (11
chunks); 1448394448 used

So yeah, the overhead is pretty huge in this case - basically 100%
overhead, because the inner table row width is only ~40B. With wider
rows the overhead will be lower.

Now, with the patch it looks like this:

    VIRT    RES    SHR S  %CPU %MEM   COMMAND
  1835332 720200   6096 R 100,0  8,9   postgres: EXPLAIN

HashBatchContext: 729651520 total in 21980 blocks; 0 free (0 chunks);
729651520 used

So, pretty much no overhead at all. It was slightly faster too (~5%) but
I haven't done much testing so it might be measurement error.

This patch is pretty independent of the other changes discussed here
(tweaking NTUP_PER_BUCKET / nbuckets) so I'll keep it separate.

regards
Tomas
diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c
index 589b2f1..18fd4a9 100644
--- a/src/backend/executor/nodeHash.c
+++ b/src/backend/executor/nodeHash.c
@@ -47,6 +47,7 @@ static void ExecHashSkewTableInsert(HashJoinTable hashtable,
 		int bucketNumber);
 static void ExecHashRemoveNextSkewBucket(HashJoinTable hashtable);
 
+static char * chunk_alloc(HashJoinTable hashtable, int tupleSize);
 
 /* 
  *		ExecHash
@@ -130,6 +131,9 @@ MultiExecHash(HashState *node)
 	if (node->ps.instrument)
 		InstrStopNode(node->ps.instrument, hashtable->totalTuples);
 
+	/* print some context stats */
+	MemoryContextStats(hashtable->batchCxt);
+	
 	/*
 	 * We do not return the hash table directly because it's not a subtype of
 	 * Node, and so would violate the MultiExecProcNode API.  Instead, our
@@ -223,6 +227,8 @@ ExecEndHash(HashState *node)
 	ExecEndNode(outerPlan);
 }
 
+/* 32kB chunks by default */
+#define CHUNK_SIZE	(32*1024L)
 
 /* 
  *		ExecHashTableCreate
@@ -294,6 +300,10 @@ ExecHashTableCreate(Hash *node, List *hashOperators, bool keepNulls)
 	hashtable->spaceAllowedSkew =
 		hashtable->spaceAllowed * SKEW_WORK_MEM_PERCENT / 100;
 
+	hashtable->chunk_data = NULL;
+	hashtable->chunk_used = 0;
+	hashtable->chunk_length = 0;
+
 	/*
 	 * Get info about the hash functions to be used for each hash key. Also
 	 * remember whether the join operators are strict.
@@ -717,8 +727,8 @@ ExecHashTableInsert(HashJoinTable hashtable,
 
 		/* Create the HashJoinTuple */
 		hashTupleSize = HJTUPLE_OVERHEAD + tuple->t_len;
-		hashTuple = (HashJoinTuple) MemoryContextAlloc(hashtable->batchCxt,
-	   hashTupleSize);
+		hashTuple = (HashJoinTuple) chunk_alloc(hashtable, hashTupleSize);
+
 		hashTuple->hashvalue = hashvalue;
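The chunk_alloc() definition itself is cut off in the excerpt above; the
idea is a simple bump allocator. A standalone sketch of the technique
follows (the real patch keeps its chunks in the batch memory context and
tracks them for reset; this sketch omits that bookkeeping):

#include <stdlib.h>

#define CHUNK_SIZE	(32 * 1024L)

typedef struct Chunk
{
	char	   *data;		/* current chunk */
	size_t		used;		/* bytes handed out so far */
	size_t		length;		/* total size of current chunk */
} Chunk;

/*
 * Bump-allocate tupleSize bytes, starting a fresh chunk whenever the
 * current one cannot fit the request.  Tuples get packed densely,
 * avoiding the per-tuple overhead of a general-purpose allocator.
 */
static char *
chunk_alloc(Chunk *chunk, size_t tupleSize)
{
	if (chunk->data == NULL || chunk->used + tupleSize > chunk->length)
	{
		chunk->length = (tupleSize > CHUNK_SIZE) ? tupleSize : CHUNK_SIZE;
		chunk->data = malloc(chunk->length);
		chunk->used = 0;
	}
	chunk->used += tupleSize;
	return chunk->data + chunk->used - tupleSize;
}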
 		

Re: [HACKERS] TODO : Allow parallel cores to be used by vacuumdb [ WIP ]

2014-07-10 Thread Jeff Janes
On Fri, Jun 27, 2014 at 4:10 AM, Dilip kumar dilip.ku...@huawei.com wrote:
 On 27 June 2014 02:57, Jeff Wrote,

 Based on that, I find most importantly that it doesn't seem to
 correctly vacuum tables which have upper case letters in the name,
 because it does not quote the table names when they need quotes.

 Thanks for your comments

 There are two problems.
 First - when doing a vacuum of the complete database, if any table name
 had an upper-case letter, it gave an error
 --FIXED by adding quotes around the table name

 Second - when the user passes a table using the -t option and it has an
 upper-case letter
 --This is an existing problem (without the parallel implementation),

Just for the record, I don't think the second one is actually a bug.
If someone uses -t option from the command line, they are required to
provide the quotes if quotes are needed, just like they would need to
in psql.  That can be annoying to do from a shell, as you then need to
protect the quotes themselves from the shell, but that is the way it
is.

vacuumdb -t '"CrAzY QuOtE"'
or
vacuumdb -t \"CrAzY\ QuOtE\"

Cheers,

Jeff




Re: [HACKERS] Pg_upgrade and toast tables bug discovered

2014-07-10 Thread Bruce Momjian
On Thu, Jul 10, 2014 at 10:46:30AM -0400, Robert Haas wrote:
  Agreed.  I am now thinking we could harness the code that already exists
  to optionally add a TOAST table as part of ALTER TABLE ADD COLUMN.  We
  would just need an entry point to call it from pg_upgrade, either via an
  SQL command that checks (and hopefully doesn't do anything else), or a C
  function that does it, e.g. VACUUM would be trivial to run on every
  database, but I don't think it tests that; it _could_ in binary_upgrade
  mode.  However, the idea of having a C function plug into the guts of
  the server and call internal functions makes me uncomfortable.
 
 Well, pg_upgrade_support's charter is basically to provide access to
 the guts of the server in ways we wouldn't normally allow; all that
 next-OID stuff is basically exactly that.  So I don't think this is
 such a big deal.  It needs to be properly commented, of course.

If you look at how oid assignment is handled, it is done in a very
surgical way, i.e. pg_upgrade_support sets a global variable, and the
variable triggers different behavior in a CREATE command.  This change
would be far more invasive than that.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Minmax indexes

2014-07-10 Thread Josh Berkus
On 07/10/2014 12:20 PM, Alvaro Herrera wrote:
 So I guess the only thing left is to issue a NOTICE when said alter
  takes place (I don't see that on the patch, but maybe it's there?)
 That's not in the patch.  I don't think we have an appropriate place to
 emit such a notice.

What do you mean by "don't have an appropriate place"?

The suggestion is that when a user does:

ALTER INDEX foo_minmax SET PAGES_PER_RANGE=100

they should get a NOTICE:

NOTICE: changes to pages per range will not take effect until the index
is REINDEXed

otherwise, we're going to get a lot of "I altered the pages per range,
but performance didn't change" emails.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Pg_upgrade and toast tables bug discovered

2014-07-10 Thread Andres Freund
On 2014-07-10 16:33:40 -0400, Bruce Momjian wrote:
 On Thu, Jul 10, 2014 at 10:46:30AM -0400, Robert Haas wrote:
   Agreed.  I am now thinking we could harness the code that already exists
   to optionally add a TOAST table as part of ALTER TABLE ADD COLUMN.  We
   would just need an entry point to call it from pg_upgrade, either via an
   SQL command that checks (and hopefully doesn't do anything else), or a C
   function that does it, e.g. VACUUM would be trivial to run on every
   database, but I don't think it tests that; it _could_ in binary_upgrade
   mode.  However, the idea of having a C function plug into the guts of
   the server and call internal functions makes me uncomfortable.
  
  Well, pg_upgrade_support's charter is basically to provide access to
  the guts of the server in ways we wouldn't normally allow; all that
  next-OID stuff is basically exactly that.  So I don't think this is
  such a big deal.  It needs to be properly commented, of course.
 
 If you look at how oid assignment is handled, it is done in a very
 surgical way, i.e. pg_upgrade_support sets a global variable, and the
 variable triggers different behavior in a CREATE command.  This change
 would be far more invasive than that.

Meh. It's only somewhat surgical because there's pg_upgrade specific
code sprinkled in the backend at strategic places. That's the contrary
of a clear abstraction barrier. And arguably a function call from a SQL
C function has a much clearer abstraction barrier.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Minmax indexes

2014-07-10 Thread Jaime Casanova
On Thu, Jul 10, 2014 at 3:50 PM, Josh Berkus j...@agliodbs.com wrote:
 On 07/10/2014 12:20 PM, Alvaro Herrera wrote:
 So I guess the only thing left is to issue a NOTICE when said alter
  takes place (I don't see that on the patch, but maybe it's there?)
 That's not in the patch.  I don't think we have an appropriate place to
 emit such a notice.

 What do you mean by "don't have an appropriate place"?

 The suggestion is that when a user does:

 ALTER INDEX foo_minmax SET PAGES_PER_RANGE=100

 they should get a NOTICE:

 NOTICE: changes to pages per range will not take effect until the index
 is REINDEXed

 otherwise, we're going to get a lot of "I altered the pages per range,
 but performance didn't change" emails.


How is this different from ALTER TABLE foo SET (FILLFACTOR=80);  or
from ALTER TABLE foo ALTER bar SET STORAGE EXTERNAL;  ?

we don't get a notice for these cases either

-- 
Jaime Casanova www.2ndQuadrant.com
Professional PostgreSQL: 24x7 support and training
Phone: +593 4 5107566 Cell: +593 987171157




Re: [HACKERS] Minmax indexes

2014-07-10 Thread Alvaro Herrera
Josh Berkus wrote:
 On 07/10/2014 12:20 PM, Alvaro Herrera wrote:
  So I guess the only thing left is to issue a NOTICE when said alter
   takes place (I don't see that on the patch, but maybe it's there?)
  That's not in the patch.  I don't think we have an appropriate place to
  emit such a notice.
 
 What do you mean by don't have an appropriate place?

What I think should happen is that if the value is changed, the index
should be rebuilt right there.  But there is no way to have this occur
from the generic tablecmds.c code.  Maybe we should extend the AM
interface so that they are notified of changes and can take action.
Inserting AM-specific code into tablecmds.c seems pretty wrong to me --
existing stuff for WITH CHECK OPTION views notwithstanding.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Proposal for CSN based snapshots

2014-07-10 Thread Alvaro Herrera
Heikki Linnakangas wrote:

 Attached is a new patch. It now keeps the current pg_clog unchanged,
 but adds a new pg_csnlog besides it. pg_csnlog is more similar to
 pg_subtrans than pg_clog: it's not WAL-logged, is reset at startup,
 and segments older than GlobalXmin can be truncated.
 
 This addresses the disk space consumption, and simplifies pg_upgrade.
 
 There are no other significant changes in this new version, so it's
 still very much WIP. But please take a look!

Thanks.  I've been looking at this.  It needs a rebase; I had to apply it
at commit 89cf2d52030 (Wed Jul 2 13:11:05 2014 -0400) because of
conflicts with the commit after that; it applies there with some fuzz.  I read
the discussion in this thread and the README, to try to understand how
this is supposed to work.  It looks pretty good.

One thing not quite clear to me is the now-static RecentXmin,
GetRecentXmin(), AdvanceGlobalRecentXmin, and the like.  I found the
whole thing pretty confusing.  I noticed that ShmemVariableCache->recentXmin
is read without a lock in GetSnapshotData(), but exclusive ProcArrayLock
is taken to write to it in GetRecentXmin().  I think we can write it
without a lock also, since we assume that storing an xid is atomic.
With that change, I think you can make the acquisition of ProcArrayLock
to walk the pgproc list use shared mode, not exclusive.

Another point is that RecentXmin is no longer used directly, except in
TransactionIdIsActive().  But that routine is only used for an Assert()
now, so I think it's fine to just have GetRecentXmin() return the value
and not set RecentXmin as a side effect; my point is that ISTM
RecentXmin can be removed altogether, which makes that business simpler.


As far as GetOldestXmin() is concerned, that routine now ignores its
arguments.  Is that okay?  I imagine it's just a natural consequence of
how things work now.   [ ... reads some more code ... ]  Oh, now it's
only used to determine a freeze limit.  I think you should just remove
the arguments and make that whole thing simpler.  I was going to suggest
that AdvanceRecentGlobalXmin() could receive a possibly-NULL Relation
argument to pass down to GetOldestSnapshotGuts() which can make use of
it for a tighter limit, but since OldestXmin is only for freezing, maybe
there is no point in this extra complication.

Regarding the pendingGlobalXmin stuff, I didn't find any documentation.
I think you just need to write a very long comment on top of
AdvanceRecentGlobalXmin() to explain what it's doing.  After going over
the code half a dozen times I think I understand the idea, and it
seems sound.

Not sure about the advancexmin_counter thing.  Maybe in some cases
sessions will only run 9 transactions or fewer and then finish, in which
case it will only get advanced at checkpoint time, which would be way
too seldom.  Maybe it would work to place the counter in shared memory:

acquire(spinlock);
if (ShmemVariableCache->counter++ >= some_number)
{
	ShmemVariableCache->counter = 0;
	do_update = true;
}
release(spinlock);
if (do_update)
	AdvanceRecentGlobalXmin();
(Maybe we can forgo the spinlock altogether?  If the value gets out of
hand once in a while, it shouldn't really matter.)  Perhaps some_number
should be scaled to some fraction of MaxBackends, but not less
than X (32?) and no more than Y (128?).  Or maybe just use constant 10,
as the current code does.  But it'd be the total number of transactions,
not transactions in the current backend.


I think it'd be cleaner to have a distinct typedef to use when
XLogRecPtr is used as a snapshot representation.  Right now there is no
clarity on whether we're interested in the xlog position itself or in a
snapshot value.


HeapTupleSatisfiesVacuum[X]() and various callers need updates to their
comments: when OldestXmin is mentioned, it should be OldestSnapshot.

I noticed that SubTransGetTopmostTransaction() is now only called from
predicate.c, and it's pretty constrained in what it wants.  I'm not
implying that we want to do anything in your patch about it, other than
perhaps add a note to it that we may want to examine it later for
possible changes.

I haven't gone over the transam.c, clog.c changes yet.

I attach a couple of minor tweaks to the README, mostly for clarity (but
also update clog -> csnlog in a couple of places).

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
diff --git a/src/backend/access/transam/README b/src/backend/access/transam/README
index d11c817..23479b6 100644
--- a/src/backend/access/transam/README
+++ b/src/backend/access/transam/README
@@ -247,24 +247,24 @@ What we actually enforce is strict serialization of commits and rollbacks
 with snapshot-taking. We use the LSNs generated by Write-Ahead-Logging as
 a convenient monotonically increasing counter, to serialize commits with
 snapshots. Each commit is naturally assigned an LSN; it's the LSN of the
-commit WAL record. 

Re: [HACKERS] Minmax indexes

2014-07-10 Thread Josh Berkus
On 07/10/2014 02:30 PM, Jaime Casanova wrote:
 How is this different from ALTER TABLE foo SET (FILLFACTOR=80);  or
 from ALTER TABLE foo ALTER bar SET STORAGE EXTERNAL;  ?
 
 we don't get a notice for these cases either

Good idea.  We should also emit notices for those.  Well, maybe not for
fillfactor.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Missing autocomplete for CREATE DATABASE

2014-07-10 Thread Vik Fearing
On 07/10/2014 09:32 PM, Magnus Hagander wrote:
 It seems psql is missing autocomplete entries for LC_COLLATE and
 LC_CTYPE for the CREATE DATABASE command. Attached patch adds that.
 
 I doubt this is important enough to backpatch - thoughts?

It won't apply to current head, but otherwise I see no problem with it.

I have no opinion on backpatching it.
-- 
Vik




Re: [HACKERS] Pg_upgrade and toast tables bug discovered

2014-07-10 Thread Bruce Momjian
On Thu, Jul 10, 2014 at 11:11:19PM +0200, Andres Freund wrote:
 On 2014-07-10 16:33:40 -0400, Bruce Momjian wrote:
  On Thu, Jul 10, 2014 at 10:46:30AM -0400, Robert Haas wrote:
Agreed.  I am now thinking we could harness the code that already exists
to optionally add a TOAST table as part of ALTER TABLE ADD COLUMN.  We
would just need an entry point to call it from pg_upgrade, either via an
SQL command that checks (and hopefully doesn't do anything else), or a C
function that does it, e.g. VACUUM would be trivial to run on every
database, but I don't think it tests that;  it _could_ in binary_upgrade
mode.  However, the idea of having a C function plug into the guts of
the server and call internal functions makes me uncomfortable.
   
   Well, pg_upgrade_support's charter is basically to provide access to
   the guts of the server in ways we wouldn't normally allow; all that
   next-OID stuff is basically exactly that.  So I don't think this is
   such a big deal.  It needs to be properly commented, of course.
  
  If you look at how oid assignment is handled, it is done in a very
  surgical way, i.e. pg_upgrade_support sets a global variable, and the
  variable triggers different behavior in a CREATE command.  This change
  would be far more invasive than that.
 
 Meh. It's only somewhat surgical because there's pg_upgrade specific
 code sprinkled in the backend at strategic places. That's the contrary
 of a clear abstraction barrier. And arguably a function call from a SQL
 C function has a much clearer abstraction barrier.

Well, we are going to need to call internal C functions, often bypassing
their typical call sites and the assumptions about locking, etc.  Perhaps
this could be done from a plpgsql function.  We could add and drop a
dummy column to force TOAST table creation --- we would then only need a
way to detect if a table _needs_ a TOAST table, which was skipped in
binary upgrade mode previously.

That might be a minimalistic approach.
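
For illustration, a minimal SQL sketch of that dummy-column idea (the
table and column names here are hypothetical placeholders, not part of
any patch):

    -- Adding a TOASTable column forces TOAST-table creation if one is
    -- missing; dropping the column afterwards leaves the TOAST table behind.
    ALTER TABLE some_table ADD COLUMN pg_upgrade_dummy text;
    ALTER TABLE some_table DROP COLUMN pg_upgrade_dummy;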

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Minmax indexes

2014-07-10 Thread Jeff Janes
On Thu, Jul 10, 2014 at 2:30 PM, Jaime Casanova ja...@2ndquadrant.com
wrote:

 On Thu, Jul 10, 2014 at 3:50 PM, Josh Berkus j...@agliodbs.com wrote:
  On 07/10/2014 12:20 PM, Alvaro Herrera wrote:
  So I guess the only thing left is to issue a NOTICE when said alter
   takes place (I don't see that on the patch, but maybe it's there?)
  That's not in the patch.  I don't think we have an appropriate place to
  emit such a notice.
 
  What do you mean by don't have an appropriate place?
 
  The suggestion is that when a user does:
 
  ALTER INDEX foo_minmax SET PAGES_PER_RANGE=100
 
  they should get a NOTICE:
 
  NOTICE: changes to pages per range will not take effect until the index
  is REINDEXed
 
  otherwise, we're going to get a lot of "I altered the pages per range,
  but performance didn't change" emails.
 

 How is this different from ALTER TABLE foo SET (FILLFACTOR=80);  or
 from ALTER TABLE foo ALTER bar SET STORAGE EXTERNAL;  ?

 we don't get a notice for these cases either



I think those are different.  They don't rewrite existing data in the
table, but they are applied to new (and updated) data. My understanding is
that changing  PAGES_PER_RANGE will have no effect on future data until a
re-index is done, even if the entire table eventually turns over.
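
For example, the pattern under discussion would look something like this
(a sketch only; the exact reloption spelling is whatever the minmax patch
settles on):

    ALTER INDEX foo_minmax SET (pages_per_range = 100);  -- catalog change only
    REINDEX INDEX foo_minmax;   -- needed before the new value has any effect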

Cheers,

Jeff


Re: [HACKERS] Minmax indexes

2014-07-10 Thread Greg Stark
On Thu, Jul 10, 2014 at 10:29 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:

 What I think should happen is that if the value is changed, the index
 sholud be rebuilt right there.

I disagree. It would be a non-orthogonal interface if ALTER TABLE
sometimes causes the index to be rebuilt and sometimes just makes a
configuration change. I already see a lot of user confusion when some
ALTER TABLE commands rewrite the table and some are quick meta data
changes.

Especially in this case where the type of configuration being changed
is just an internal storage parameter and the user visible shape of
the index is unchanged it would be weird to rebuild the index.

IMHO the right thing to do is just to say "this parameter is
read-only" and have the AM throw an error when the user changes it. But
even that would require an AM callback for the AM to even know about
the change.

-- 
greg




Re: [HACKERS] Pg_upgrade and toast tables bug discovered

2014-07-10 Thread Bruce Momjian
On Thu, Jul 10, 2014 at 06:17:14PM -0400, Bruce Momjian wrote:
 Well, we are going to need to call internal C functions, often bypassing
 their typical call sites and the assumptions about locking, etc.  Perhaps
 this could be done from a plpgsql function.  We could add and drop a
 dummy column to force TOAST table creation --- we would then only need a
 way to detect if a table _needs_ a TOAST table, which was skipped in
 binary upgrade mode previously.
 
 That might be a minimalistic approach.

I have thought some more on this.  I thought I would need to open
pg_class in C and do complex backend stuff, but I now realize I can do
it from libpq, and just call ALTER TABLE and I think that always
auto-checks if a TOAST table is needed.  All I have to do is query
pg_class from libpq, then construct ALTER TABLE commands for each item,
and it will optionally create the TOAST table if needed.  I just have to
use a no-op ALTER TABLE command, like SET STATISTICS.
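
As a rough sketch of what pg_upgrade could issue (the table and column
names are placeholders; SET STATISTICS -1 merely resets the target to
the default, so it changes nothing):

    -- find plain tables that currently lack a TOAST table
    SELECT c.oid::regclass
    FROM pg_catalog.pg_class c
    WHERE c.relkind = 'r' AND c.reltoastrelid = 0;

    -- for each such table, a no-op ALTER TABLE re-runs the TOAST check
    ALTER TABLE public.some_table ALTER COLUMN some_column SET STATISTICS -1;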

I am in Asia the next two weeks but will work on it after I return.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Missing autocomplete for CREATE DATABASE

2014-07-10 Thread David Fetter
On Thu, Jul 10, 2014 at 08:32:01PM +0100, Magnus Hagander wrote:
 It seems psql is missing autocomplete entries for LC_COLLATE and
 LC_CTYPE for the CREATE DATABASE command. Attached patch adds that.
 
 I doubt this is important enough to backpatch - thoughts?

I don't see how it could hurt to fix this bug.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate





Re: [HACKERS] IMPORT FOREIGN SCHEMA statement

2014-07-10 Thread Michael Paquier
On Fri, Jul 11, 2014 at 4:06 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 I wrote:
  So I propose we invent a couple more import options, say
  import_default and import_collate, and make the postgres_fdw
  code do the obvious thing with them.  import_default should
  probably default to false, but I'm about halfway tempted to
  say that import_collate should default to true --- if you're
  importing a collatable column and you don't have a matching
  locale locally, it seems like it'd be better if we complain
  about that by default.

 I've committed this patch with that addition and some minor additional
 cleanup.

Thanks!
-- 
Michael


Re: [HACKERS] Allowing NOT IN to use ANTI joins

2014-07-10 Thread Tom Lane
David Rowley dgrowle...@gmail.com writes:
 Attached is the updated version of the patch.

I spent some time fooling with this patch, cleaning up the join-alias
issue as well as more-cosmetic things.  However, while testing it
I realized that the whole thing is based on a false premise: to equate
a NOT IN with an antijoin, we'd have to prove not only that the inner
side is non-nullable, but that the outer comparison values are too.
Here's an example:

regression=# create table zt1 (f1 int);
CREATE TABLE
regression=# insert into zt1 values(1);
INSERT 0 1
regression=# insert into zt1 values(2);
INSERT 0 1
regression=# insert into zt1 values(null);
INSERT 0 1
regression=# create table zt2 (f1 int not null);
CREATE TABLE
regression=# insert into zt2 values(1);
INSERT 0 1

With the patch in place, we get

regression=# explain select * from zt1 where f1 not in (select f1 from zt2);
                            QUERY PLAN                             
-------------------------------------------------------------------
 Hash Anti Join  (cost=64.00..144.80 rows=1200 width=4)
   Hash Cond: (zt1.f1 = zt2.f1)
   ->  Seq Scan on zt1  (cost=0.00..34.00 rows=2400 width=4)
   ->  Hash  (cost=34.00..34.00 rows=2400 width=4)
         ->  Seq Scan on zt2  (cost=0.00..34.00 rows=2400 width=4)
 Planning time: 0.646 ms
(6 rows)

regression=# select * from zt1 where f1 not in (select f1 from zt2);
 f1 
----
  2
   
(2 rows)

which is the wrong answer, as demonstrated by comparison to the result
without optimization:

regression=# alter table zt2 alter column f1 drop not null;
ALTER TABLE
regression=# explain select * from zt1 where f1 not in (select f1 from zt2);
                          QUERY PLAN                           
---------------------------------------------------------------
 Seq Scan on zt1  (cost=40.00..80.00 rows=1200 width=4)
   Filter: (NOT (hashed SubPlan 1))
   SubPlan 1
     ->  Seq Scan on zt2  (cost=0.00..34.00 rows=2400 width=4)
 Planning time: 0.163 ms
(5 rows)

regression=# select * from zt1 where f1 not in (select f1 from zt2);
 f1 
----
  2
(1 row)

We could no doubt fix this by also insisting that the left-side vars
be provably not null, but that's going to make the patch even slower
and even less often applicable.  I'm feeling discouraged about whether
this is worth doing in this form.

For the archives' sake, I attach an updated version with the fixes
I'd applied so far.

regards, tom lane

diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c
index 3e7dc85..4b44662 100644
*** a/src/backend/optimizer/plan/subselect.c
--- b/src/backend/optimizer/plan/subselect.c
*** SS_process_ctes(PlannerInfo *root)
*** 1195,1205 
   * If so, form a JoinExpr and return it.  Return NULL if the SubLink cannot
   * be converted to a join.
   *
!  * The only non-obvious input parameter is available_rels: this is the set
!  * of query rels that can safely be referenced in the sublink expression.
!  * (We must restrict this to avoid changing the semantics when a sublink
!  * is present in an outer join's ON qual.)  The conversion must fail if
!  * the converted qual would reference any but these parent-query relids.
   *
   * On success, the returned JoinExpr has larg = NULL and rarg = the jointree
   * item representing the pulled-up subquery.  The caller must set larg to
--- 1195,1208 
   * If so, form a JoinExpr and return it.  Return NULL if the SubLink cannot
   * be converted to a join.
   *
!  * If under_not is true, the caller actually found NOT (ANY SubLink),
!  * so that what we must try to build is an ANTI not SEMI join.
!  *
!  * available_rels is the set of query rels that can safely be referenced
!  * in the sublink expression.  (We must restrict this to avoid changing the
!  * semantics when a sublink is present in an outer join's ON qual.)
!  * The conversion must fail if the converted qual would reference any but
!  * these parent-query relids.
   *
   * On success, the returned JoinExpr has larg = NULL and rarg = the jointree
   * item representing the pulled-up subquery.  The caller must set larg to
*** SS_process_ctes(PlannerInfo *root)
*** 1222,1228 
   */
  JoinExpr *
  convert_ANY_sublink_to_join(PlannerInfo *root, SubLink *sublink,
! 			Relids available_rels)
  {
  	JoinExpr   *result;
  	Query	   *parse = root->parse;
--- 1225,1231 
   */
  JoinExpr *
  convert_ANY_sublink_to_join(PlannerInfo *root, SubLink *sublink,
! 			bool under_not, Relids available_rels)
  {
  	JoinExpr   *result;
  	Query	   *parse = root->parse;
*** convert_ANY_sublink_to_join(PlannerInfo 
*** 1237,1242 
--- 1240,1254 
  	Assert(sublink->subLinkType == ANY_SUBLINK);
  
  	/*
+ 	 * Per SQL spec, NOT IN is not ordinarily equivalent to an anti-join, so
+ 	 * that by default we have to fail when under_not.  However, if we can
+ 	 * prove that the sub-select's output columns are all certainly not NULL,
+ 	 

Re: [HACKERS] Allowing NOT IN to use ANTI joins

2014-07-10 Thread Tom Lane
I wrote:
 We could no doubt fix this by also insisting that the left-side vars
 be provably not null, but that's going to make the patch even slower
 and even less often applicable.  I'm feeling discouraged about whether
 this is worth doing in this form.

Hm ... actually, there might be a better answer: what about transforming

   WHERE (x,y) NOT IN (SELECT provably-not-null-values FROM ...)

to

   WHERE <antijoin condition> AND x IS NOT NULL AND y IS NOT NULL

?

Of course this would require x/y not being volatile, but if they are,
we're not going to get far with optimizing the query anyhow.
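
Using the earlier zt1/zt2 example, the rewrite would behave like this
(a sketch only, with NOT EXISTS standing in for the anti-join the planner
would actually build):

    -- original
    SELECT * FROM zt1 WHERE f1 NOT IN (SELECT f1 FROM zt2);

    -- equivalent form once zt2.f1 is provably not null
    SELECT *
    FROM zt1
    WHERE f1 IS NOT NULL
      AND NOT EXISTS (SELECT 1 FROM zt2 WHERE zt2.f1 = zt1.f1);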

regards, tom lane




[HACKERS] how many changes about backend mode from 7.2.2 to 8.4.0?

2014-07-10 Thread 土卜皿
hi, all
NOTE: fedora 17 x86_64; 7.2.2 cannot be compiled in this environment, so I
chose 8.4.0, because I think an older version is easier to understand than
a newer one!

I want to study pg's external sort (in tuplesort.c) through 8.4.0's source
code, and use ddd to do it, according to usingddd-postgres
http://www-inst.eecs.berkeley.edu/~cs186/fa04/usingddd.pdf 's description:

 "In the backend mode, you don’t create a separate postmaster process and
don’t use psql at all. Instead, you start up postgres from the
command line and directly interact with it."

and after starting postgres in ddd, a backend interactive interface works
as in the doc's Figure 5, and I found that version 7.2.2's postgres.c has a
backend interactive interface, but my version is 8.4.0, which seemingly has
no backend interactive interface, so I want to ask

(1) whether I must use another method to debug the relevant external sort
code (in tuplesort.c), as some posts said that I can execute psql, run
SELECT pg_backend_pid(), and gdb that pid.  Any advice will be appreciated!

BEST REGARDS
Dillon Peng


Re: [HACKERS] how many changes about backend mode from 7.2.2 to 8.4.0?

2014-07-10 Thread Michael Paquier
On Fri, Jul 11, 2014 at 10:50 AM, 土卜皿 pengcz.n...@gmail.com wrote:
 NOTE: fedora 17 x86_64; 7.2.2 cannot be compiled in this environment, so I chose 8.4.0
 because I think an older version is easier to understand than a newer one!
Are you aware that Postgres 7.2 was released in 2002? It has been EOL
(end-of-life) since 2005, according to the release notes.

 I want to study pg's external sort (in tuplesort.c )through 8.4.0's source
 code, and use ddd to do it, according to usingddd-postgres 's description:
If you are planning to do some development in the future with or for
Postgres, you would get a better insight by looking at more recent
code. Here are some guidelines for example to use git, which is really
helpful as a base infrastructure when studying the code:
https://wiki.postgresql.org/wiki/Working_with_GIT

Hope this helps.
Regards,
-- 
Michael




Re: [HACKERS] how many changes about backend mode from 7.2.2 to 8.4.0?

2014-07-10 Thread 土卜皿
hi,
  thanks a lot!


2014-07-11 10:10 GMT+08:00 Michael Paquier michael.paqu...@gmail.com:

 On Fri, Jul 11, 2014 at 10:50 AM, 土卜皿 pengcz.n...@gmail.com wrote:
  NOTE: fedora 17 x86_64; 7.2.2 cannot be compiled in this environment, so I chose
  8.4.0, because I think an older version is easier to understand than a newer one!
 Are you aware that Postgres 7.2 was released in 2002? It has been EOL
 (end-of-life) since 2005, according to the release notes.

I know the release date.



  I want to study pg's external sort (in tuplesort.c )through 8.4.0's
 source
  code, and use ddd to do it, according to usingddd-postgres 's
 description:
 If you are planning to do some development in the future with or for
 Postgres, you would get a better insight by looking at more recent
 code. Here are some guidelines for example to use git, which is really
 helpful as a base infrastructure when studying the code:
 https://wiki.postgresql.org/wiki/Working_with_GIT

 seemingly the wiki you mentioned has no information about debugging

I am sorry, maybe I should make my question clearer. I only want to know,
in 8.4.0 or a newer version,
whether I can debug postgres in the bare backend mode (in usingddd-postgres
http://www-inst.eecs.berkeley.edu/~cs186/fa04/usingddd.pdf, he said that
"There are two ways to debug postgres: (a) in the interactive mode and (b) in
the bare backend mode"
). If yes, what should I do?  Thanks in advance!

BEST REGARDS
Dillon


Re: [HACKERS] how many changes about backend mode from 7.2.2 to 8.4.0?

2014-07-10 Thread Tom Lane
土卜皿 pengcz.n...@gmail.com writes:
 I am sorry, maybe I should make my question clearer, I only want to know,
 in 8.4.0 or newer version,
  whether I can debug postgres in the bare backend mode (in usingddd-postgres
 http://www-inst.eecs.berkeley.edu/~cs186/fa04/usingddd.pdf, he said that 
 There are two ways to debug postgres (a) in the interactive mode and (b) in
 the bare backend mode

Yeah, that still works as well or poorly as it ever did, though the
startup process is a bit different: you need to say postgres --single
to launch a standalone backend.

But really, nobody does it that way anymore.  There is no advantage to it,
unless maybe you're trying to debug a startup-time crash.  The standalone
backend interface is very unpleasant to use compared to psql: no readline
command editing, no nice formatting of output, no tab completion or help,
etc etc.  For debugging of normal query operation it's far better to fire
up a psql session and then gdb the connected backend process.
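
For instance (the pid is whatever your own session reports):

    -- in the psql session whose backend you want to debug
    SELECT pg_backend_pid();
    -- then, from another shell, attach the debugger to that pid,
    -- e.g.  gdb -p <pid>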

There's more info here:
https://wiki.postgresql.org/wiki/Developer_FAQ#gdb

and if gdb isn't giving you useful symbolic information, see
the setup tips here:
https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD

regards, tom lane




Re: [HACKERS] how many changes about backend mode from 7.2.2 to 8.4.0?

2014-07-10 Thread 土卜皿
hi, tom
  thank you very much! this is what I wanted!

Dillon


2014-07-11 10:59 GMT+08:00 Tom Lane t...@sss.pgh.pa.us:

 土卜皿 pengcz.n...@gmail.com writes:
  I am sorry, maybe I should make my question clearer, I only want to know,
  in 8.4.0 or newer version,
   whether I can debug postgres in the bare backend mode (in
  usingddd-postgres
  http://www-inst.eecs.berkeley.edu/~cs186/fa04/usingddd.pdf, he said
 that 
  There are two ways to debug postgres (a) in the interactive mode and (b)
 in
  the bare backend mode

 Yeah, that still works as well or poorly as it ever did, though the
 startup process is a bit different: you need to say postgres --single
 to launch a standalone backend.

 But really, nobody does it that way anymore.  There is no advantage to it,
 unless maybe you're trying to debug a startup-time crash.  The standalone
 backend interface is very unpleasant to use compared to psql: no readline
 command editing, no nice formatting of output, no tab completion or help,
 etc etc.  For debugging of normal query operation it's far better to fire
 up a psql session and then gdb the connected backend process.

 There's more info here:
 https://wiki.postgresql.org/wiki/Developer_FAQ#gdb

 and if gdb isn't giving you useful symbolic information, see
 the setup tips here:

 https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD

 regards, tom lane



Re: [HACKERS] Pg_upgrade and toast tables bug discovered

2014-07-10 Thread Bruce Momjian
On Thu, Jul 10, 2014 at 06:38:26PM -0400, Bruce Momjian wrote:
 On Thu, Jul 10, 2014 at 06:17:14PM -0400, Bruce Momjian wrote:
  Well, we are going to need to call internal C functions, often bypassing
  their typical call sites and the assumptions about locking, etc.  Perhaps
  this could be done from a plpgsql function.  We could add and drop a
  dummy column to force TOAST table creation --- we would then only need a
  way to detect if a table _needs_ a TOAST table, which was skipped in
  binary upgrade mode previously.
  
  That might be a minimalistic approach.
 
 I have thought some more on this.  I thought I would need to open
 pg_class in C and do complex backend stuff, but I now realize I can do
 it from libpq, and just call ALTER TABLE and I think that always
 auto-checks if a TOAST table is needed.  All I have to do is query
 pg_class from libpq, then construct ALTER TABLE commands for each item,
 and it will optionally create the TOAST table if needed.  I just have to
 use a no-op ALTER TABLE command, like SET STATISTICS.
 
 I am in Asia the next two weeks but will work on it after I return.

Attached is the backend part of the patch.  I will work on the
pg_upgrade/libpq/ALTER TABLE part later.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +
diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c
new file mode 100644
index bdfeb90..6e8ef36
*** a/src/backend/catalog/toasting.c
--- b/src/backend/catalog/toasting.c
*** create_toast_table(Relation rel, Oid toa
*** 165,180 
  	if (rel->rd_rel->reltoastrelid != InvalidOid)
  		return false;
  
! 	/*
! 	 * Check to see whether the table actually needs a TOAST table.
! 	 *
! 	 * If an update-in-place toast relfilenode is specified, force toast file
! 	 * creation even if it seems not to need one.
! 	 */
! 	if (!needs_toast_table(rel) &&
! 		(!IsBinaryUpgrade ||
! 		 !OidIsValid(binary_upgrade_next_toast_pg_class_oid)))
! 		return false;
  
  	/*
  	 * If requested check lockmode is sufficient. This is a cross check in
--- 165,211 
  	if (rel->rd_rel->reltoastrelid != InvalidOid)
  		return false;
  
! 	if (!IsBinaryUpgrade)
! 	{
! 		if (!needs_toast_table(rel))
! 			return false;
! 	}
! 	else
! 	{
! 		/*
! 		 * Check to see whether the table needs a TOAST table.
! 		 *
! 		 * If an update-in-place TOAST relfilenode is specified, force TOAST file
! 		 * creation even if it seems not to need one.  This handles the case
! 		 * where the old cluster needed a TOAST table but the new cluster
! 		 * would not normally create one.
! 		 */
! 		 
! 		/*
! 		 * If a TOAST oid is not specified, skip TOAST creation as we will do
! 		 * it later so we don't create a TOAST table whose OID later conflicts
! 		 * with a user-supplied OID.  This handles cases where the old cluster
! 		 * didn't need a TOAST table, but the new cluster does.
! 		 */
! 		if (!OidIsValid(binary_upgrade_next_toast_pg_class_oid))
! 			return false;
! 
! 		/*
! 		 * If a special TOAST value has been passed in, it means we are in
! 		 * cleanup mode --- we are creating needed TOAST tables after all user
! 		 * tables with specified OIDs have been created.  We let the system
! 		 * assign a TOAST oid for us.  The tables are empty so the missing
! 		 * TOAST tables were not a problem.
! 		 */
! 		if (binary_upgrade_next_toast_pg_class_oid == OPTIONALLY_CREATE_TOAST_OID)
! 		{
! 			/* clear it so it is no longer used */
! 			binary_upgrade_next_toast_pg_class_oid = InvalidOid;
! 
! 			if (!needs_toast_table(rel))
! return false;
! 		}
! 	}
  
  	/*
  	 * If requested check lockmode is sufficient. This is a cross check in
diff --git a/src/include/catalog/binary_upgrade.h b/src/include/catalog/binary_upgrade.h
new file mode 100644
index f39017c..7aff509
*** a/src/include/catalog/binary_upgrade.h
--- b/src/include/catalog/binary_upgrade.h
***
*** 14,19 
--- 14,22 
  #ifndef BINARY_UPGRADE_H
  #define BINARY_UPGRADE_H
  
+ /* pick an OID that will never normally be used for a TOAST table */
+ #define OPTIONALLY_CREATE_TOAST_OID	1
+ 
  extern PGDLLIMPORT Oid binary_upgrade_next_pg_type_oid;
  extern PGDLLIMPORT Oid binary_upgrade_next_array_pg_type_oid;
  extern PGDLLIMPORT Oid binary_upgrade_next_toast_pg_type_oid;



[HACKERS] Allow multi-byte characters as escape in SIMILAR TO and SUBSTRING

2014-07-10 Thread Jeff Davis
Attached is a small patch to $SUBJECT.

In master, only single-byte characters are allowed as an escape. Of
course, with the patch it must still be a single character, but it may
be multi-byte.
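
For instance, something like this is rejected with "invalid escape string"
in master but accepted with the patch ('§' here is an arbitrary multi-byte
character; any single multi-byte character works):

    SELECT 'a_c' SIMILAR TO 'a§_c' ESCAPE '§';  -- escaped '_' matches literally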

Regards,
Jeff Davis

*** a/src/backend/utils/adt/regexp.c
--- b/src/backend/utils/adt/regexp.c
***
*** 688,698  similar_escape(PG_FUNCTION_ARGS)
  		elen = VARSIZE_ANY_EXHDR(esc_text);
  		if (elen == 0)
  			e = NULL;			/* no escape character */
! 		else if (elen != 1)
! 			ereport(ERROR,
! 	(errcode(ERRCODE_INVALID_ESCAPE_SEQUENCE),
! 	 errmsg("invalid escape string"),
!   errhint("Escape string must be empty or one character.")));
  	}
  
  	/*--
--- 688,704 
  		elen = VARSIZE_ANY_EXHDR(esc_text);
  		if (elen == 0)
  			e = NULL;			/* no escape character */
! 		else
! 		{
! 			int escape_mblen = pg_verify_mbstr_len(GetDatabaseEncoding(), e,
!    elen, false);
! 
! 			if (escape_mblen > 1)
! ereport(ERROR,
! 		(errcode(ERRCODE_INVALID_ESCAPE_SEQUENCE),
! 		 errmsg("invalid escape string"),
! 		 errhint("Escape string must be empty or one character.")));
! 		}
  	}
  
  	/*--
***
*** 722,781  similar_escape(PG_FUNCTION_ARGS)
  
  	while (plen > 0)
  	{
! 		char		pchar = *p;
  
! 		if (afterescape)
  		{
! 			if (pchar == '"' && !incharclass)	/* for SUBSTRING patterns */
! *r++ = ((nquotes++ % 2) == 0) ? '(' : ')';
! 			else
  			{
  *r++ = '\\';
  *r++ = pchar;
  			}
! 			afterescape = false;
! 		}
! 		else if (e && pchar == *e)
! 		{
! 			/* SQL99 escape character; do not send to output */
! 			afterescape = true;
  		}
! 		else if (incharclass)
  		{
! 			if (pchar == '\\')
  *r++ = '\\';
! 			*r++ = pchar;
! 			if (pchar == ']')
! incharclass = false;
! 		}
! 		else if (pchar == '[')
! 		{
! 			*r++ = pchar;
! 			incharclass = true;
! 		}
! 		else if (pchar == '%')
! 		{
! 			*r++ = '.';
! 			*r++ = '*';
! 		}
! 		else if (pchar == '_')
! 			*r++ = '.';
! 		else if (pchar == '(')
! 		{
! 			/* convert to non-capturing parenthesis */
! 			*r++ = '(';
! 			*r++ = '?';
! 			*r++ = ':';
! 		}
! 		else if (pchar == '\\' || pchar == '.' ||
!  pchar == '^' || pchar == '$')
! 		{
! 			*r++ = '\\';
! 			*r++ = pchar;
  		}
- 		else
- 			*r++ = pchar;
- 		p++, plen--;
  	}
  
  	*r++ = ')';
--- 728,816 
  
  	while (plen > 0)
  	{
! 		char	pchar = *p;
! 		int		mblen = pg_encoding_verifymb(GetDatabaseEncoding(), p, plen);
! 
! 		Assert(mblen > 0);
  
! 		if (mblen == 1)
  		{
! 			if (afterescape)
! 			{
! if (pchar == '"' && !incharclass)	/* for SUBSTRING patterns */
! 	*r++ = ((nquotes++ % 2) == 0) ? '(' : ')';
! else
! {
! 	*r++ = '\\';
! 	*r++ = pchar;
! }
! afterescape = false;
! 			}
! 			else if (e && pchar == *e)
! 			{
! /* SQL99 escape character; do not send to output */
! afterescape = true;
! 			}
! 			else if (incharclass)
! 			{
! if (pchar == '\\')
! 	*r++ = '\\';
! *r++ = pchar;
! if (pchar == ']')
! 	incharclass = false;
! 			}
! 			else if (pchar == '[')
! 			{
! *r++ = pchar;
! incharclass = true;
! 			}
! 			else if (pchar == '%')
! 			{
! *r++ = '.';
! *r++ = '*';
! 			}
! 			else if (pchar == '_')
! *r++ = '.';
! 			else if (pchar == '(')
! 			{
! /* convert to non-capturing parenthesis */
! *r++ = '(';
! *r++ = '?';
! *r++ = ':';
! 			}
! 			else if (pchar == '\\' || pchar == '.' ||
! 	 pchar == '^' || pchar == '$')
  			{
  *r++ = '\\';
  *r++ = pchar;
  			}
! 			else
! *r++ = pchar;
! 			p++, plen--;
  		}
! 		else
  		{
! 			if (afterescape)
! 			{
  *r++ = '\\';
! memcpy(r, p, mblen);
! r += mblen;
! afterescape = false;
! 			}
! 			else if (e && elen == mblen && memcmp(e, p, mblen) == 0)
! 			{
! /* SQL99 escape character; do not send to output */
! afterescape = true;
! 			}
! 			else
! 			{
! memcpy(r, p, mblen);
! r += mblen;
! 			}
! 
! 			p += mblen;
! 			plen -= mblen;
  		}
  	}
  
  	*r++ = ')';



Re: [HACKERS] Pg_upgrade and toast tables bug discovered

2014-07-10 Thread Alvaro Herrera
Bruce Momjian wrote:
 On Thu, Jul 10, 2014 at 06:38:26PM -0400, Bruce Momjian wrote:

  I have thought some more on this.  I thought I would need to open
  pg_class in C and do complex backend stuff, but I now realize I can do
  it from libpq, and just call ALTER TABLE and I think that always
  auto-checks if a TOAST table is needed.  All I have to do is query
  pg_class from libpq, then construct ALTER TABLE commands for each item,
  and it will optionally create the TOAST table if needed.  I just have to
  use a no-op ALTER TABLE command, like SET STATISTICS.
 
 Attached is the backend part of the patch.  I will work on the
 pg_upgrade/libpq/ALTER TABLE part later.

Uh, why does this need to be in ALTER TABLE?  Can't this be part of
table creation done by pg_dump?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Allow multi-byte characters as escape in SIMILAR TO and SUBSTRING

2014-07-10 Thread Fujii Masao
On Fri, Jul 11, 2014 at 12:49 PM, Jeff Davis pg...@j-davis.com wrote:
 Attached is a small patch to $SUBJECT.

 In master, only single-byte characters are allowed as an escape. Of
 course, with the patch it must still be a single character, but it may
 be multi-byte.

+1

Probably you know that a multi-byte character is already allowed
as an escape in LIKE. There seems to be no reason why we don't support
a multi-byte escape character in SIMILAR TO and SUBSTRING.

Could you add the patch to the next CF?

The patch doesn't contain any documentation changes. But I think that
it's better to document what characters are allowed as an escape in LIKE,
SIMILAR TO and SUBSTRING.

Regards,

-- 
Fujii Masao

