Re: [HACKERS] Proposal - Support for National Characters functionality

2013-07-05 Thread Arulappan, Arul Shaji
Ishii san,

Thank you for your positive and early response.

 -Original Message-
 From: Tatsuo Ishii [mailto:is...@postgresql.org]
 Sent: Friday, 5 July 2013 3:02 PM
 To: Arulappan, Arul Shaji
 Cc: pgsql-hackers@postgresql.org
 Subject: Re: [HACKERS] Proposal - Support for National Characters
 functionality
 
 Arul Shaji,
 
 NCHAR support has been on our TODO list for some time and I would
 like to welcome efforts trying to implement it. However I have a few
 questions:
 
  This is a proposal to implement functionalities for the handling of
  National Characters.
 
  [Introduction]
 
  The aim of this proposal is to eventually have a way to represent
  'National Characters' in a uniform way, even in non-UTF8 encoded
  databases. Many of our customers in the Asian region who are now, as
  part of their platform modernization, moving away from mainframes
  where they have used National Characters representation in COBOL and
  other databases. Having stronger support for national characters
  representation will also make it easier for these customers to look
  at PostgreSQL more favourably when migrating from other well known
  RDBMSs, which all have varying degrees of NCHAR/NVARCHAR support.
 
  [Specifications]
 
  Broadly speaking, the national characters implementation ideally
  will include the following:
  - Support for NCHAR/NVARCHAR data types
  - Representing NCHAR and NVARCHAR columns in UTF-8 encoding in
    non-UTF8 databases
 
 I think this is not trivial work because we do not have a framework
 to allow mixed encodings in a database. I'm interested in how you are
 going to solve the problem.
 

I would be lying if I said I have the design already specced out. I
will be working on this in the coming weeks and hope to design a
working solution in consultation with the community.

  - Support for UTF16 column encoding and representing NCHAR and
  NVARCHAR columns in UTF16 encoding in all databases.
 
 Why do you need UTF-16 as the database encoding? UTF-8 is already
 supported, and any UTF-16 character can be represented in UTF-8 as
 far as I know.
 

Yes, that's correct. However there are advantages in using UTF-16
encoding for those characters that are always going to take at least
two bytes to represent.

Having said that, my intention is to use UTF-8 for NCHAR as well.
Supporting UTF-16 will be even more complicated as it is not supported
natively in some Linux platforms. I only included it to give an option.
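To make the size trade-off concrete: a BMP CJK character takes three bytes in UTF-8 but only two in UTF-16, while an ASCII character takes one byte in UTF-8 and two in UTF-16. A minimal sketch of the arithmetic; `utf8_len` and `utf16_len` are illustrative helpers, not PostgreSQL functions.

```c
/* Bytes needed to encode a Unicode code point in UTF-8. */
static int utf8_len(unsigned int cp)
{
	if (cp < 0x80) return 1;		/* ASCII */
	if (cp < 0x800) return 2;		/* Latin supplements, Greek, Cyrillic, ... */
	if (cp < 0x10000) return 3;		/* rest of the BMP, including CJK */
	return 4;						/* supplementary planes */
}

/* Bytes needed to encode a code point in UTF-16: 2 for the BMP,
 * 4 for supplementary-plane characters (surrogate pairs). */
static int utf16_len(unsigned int cp)
{
	return cp < 0x10000 ? 2 : 4;
}
```

For predominantly CJK data this is roughly a 3:2 size advantage for UTF-16, which is the advantage referred to above; for mixed or mostly-ASCII data UTF-8 usually wins.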

  - Support for a NATIONAL_CHARACTER_SET GUC variable that will
    determine the encoding that will be used in NCHAR/NVARCHAR columns.
 
 You said NCHAR's encoding is UTF-8. Why do you need the GUC if NCHAR's
 encoding is fixed to UTF-8?
 

If we are going to only support UTF-8 for NCHAR, then we don't need the
GUC variable obviously.

Rgds,
Arul Shaji



  The above points are at the moment a 'wishlist' only. Our aim is to
  tackle them one-by-one as we progress. I will send a detailed
  proposal later with more technical details.
 
  The main aim at the moment is to get some feedback on the above to
  know if this feature is something that would benefit PostgreSQL in
  general, and if users maintaining DBs in non-English speaking
  regions will find this beneficial.
 
  Rgds,
  Arul Shaji
 
 
 
  P.S.: It has been quite some time since I sent a correspondence to
  this list. Our mail server adds a standard legal disclaimer to all
  outgoing mails, which I know this list is not a huge fan of. I
  used to have an exemption for the mails I send to this list. If the
  disclaimer appears, apologies in advance. I will rectify that on
  the next one.
 --
 Tatsuo Ishii
 SRA OSS, Inc. Japan
 English: http://www.sraoss.co.jp/index_en.php
 Japanese: http://www.sraoss.co.jp




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal - Support for National Characters functionality

2013-07-05 Thread Arulappan, Arul Shaji


 -Original Message-
 From: Claudio Freire [mailto:klaussfre...@gmail.com]
 Sent: Friday, 5 July 2013 3:41 PM
 To: Tatsuo Ishii
 Cc: Arulappan, Arul Shaji; PostgreSQL-Dev
 Subject: Re: [HACKERS] Proposal - Support for National Characters
 functionality
 
 On Fri, Jul 5, 2013 at 2:02 AM, Tatsuo Ishii is...@postgresql.org wrote:
  - Support for a NATIONAL_CHARACTER_SET GUC variable that will
    determine the encoding that will be used in NCHAR/NVARCHAR columns.
 
  You said NCHAR's encoding is UTF-8. Why do you need the GUC if
  NCHAR's encoding is fixed to UTF-8?
 
 
 Not only that, but I don't think it can be a GUC. Maybe a
 compile-time switch, but if it were a GUC, how do you handle an
 existing database in UTF-8 when the setting is switched to UTF-16?
 Re-encode everything? Store the encoding along each value? It's a
 mess.
 
 Either fix it at UTF-8, or make it a compile-time thing, I'd say.

Agreed that to begin with we should only support UTF-8 encoding for
NCHAR columns. If that is the case, do we still need a compile-time
option to turn NCHAR functionality on/off?

Rgds,
Arul Shaji




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] fallocate / posix_fallocate for new WAL file creation (etc...)

2013-07-05 Thread Jeff Davis
On Mon, 2013-07-01 at 00:52 -0400, Greg Smith wrote:
 On 6/30/13 9:28 PM, Jon Nelson wrote:
  The performance of the latter (new) test sometimes seems to perform
  worse and sometimes seems to perform better (usually worse) than
  either of the other two. In all cases, posix_fallocate performs
  better, but I don't have a sufficiently old kernel to test with.
 
 This updated test program looks reliable now.  The numbers are very 
 tight when I'd expect them to be, and there's nowhere with the huge 
 differences I saw in the earlier test program.
 
 Here's results from a few sets of popular older platforms:

...

 The win for posix_fallocate is there in most cases, but it's pretty hard 
 to see in these older systems.  That could be OK.  As long as the 
 difference is no more than noise, and that is the case, this could be 
 good enough to commit.  If there are significantly better results on the 
 new platforms, the old ones need to just not get worse.

I ran my own little test on my workstation[1] with the attached
programs. One does what we do now, another mimics the glibc emulation
you described earlier, and another uses posix_fallocate(). It does an
allocation phase, an fsync, a single rewrite, and then another fsync.
The program runs this 64 times for 64 different 16MB files.

write1 and write2 are almost exactly even at 25.5s. write3 is about
14.5s, which is a pretty big improvement.

So, my simple conclusion is that glibc emulation should be about the
same as what we're doing now, so there's no reason to avoid it. That
means, if posix_fallocate() is present, we should use it, because it's
either the same (if emulated in glibc) or significantly faster (if
implemented in the kernel).
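That decision rule can be sketched as follows, assuming a POSIX system; `allocate_zeroed` and the `HAVE_POSIX_FALLOCATE` guard are hypothetical names for illustration, not the actual patch code.

```c
#include <fcntl.h>
#include <unistd.h>

#define ZERO_BUF_SIZE 8192

/* Zero-fill [0, size) of fd: prefer posix_fallocate(), which is either
 * implemented in the kernel (fast) or emulated by glibc (no slower than
 * writing zeros ourselves); otherwise fall back to plain writes.
 * Returns 0 on success, -1 on failure. */
static int
allocate_zeroed(int fd, off_t size)
{
#ifdef HAVE_POSIX_FALLOCATE
	return posix_fallocate(fd, 0, size) == 0 ? 0 : -1;
#else
	static const char zbuf[ZERO_BUF_SIZE];	/* zero-initialized */
	off_t		remaining = size;

	while (remaining > 0)
	{
		ssize_t		n = write(fd, zbuf,
							  remaining > ZERO_BUF_SIZE
							  ? ZERO_BUF_SIZE : (size_t) remaining);

		if (n <= 0)
			return -1;
		remaining -= n;
	}
	return 0;
#endif
}
```

With the guard defined (as a configure check would arrange), the kernel or glibc handles the allocation; without it, the fallback is equivalent to what is done today.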

If that is your conclusion, as well, it looks like this patch is about
ready for commit. What do you think?

Regards,
Jeff Davis

[1] workstation specs (ubuntu 12.10, ext4):
$ uname -a
Linux jdavis 3.5.0-34-generic #55-Ubuntu SMP Thu Jun 6 20:18:19 UTC 2013
x86_64 x86_64 x86_64 GNU/Linux


#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

char buf[8192] = {0};

int main()
{
	int i;
	int filenum;
	char filename[64];

	for (filenum = 0; filenum < 64; filenum++)
	{
		sprintf(filename, "/tmp/afile.%02d", filenum);
		int fd = open(filename, O_CREAT | O_EXCL | O_WRONLY, 0600);

		// allocate all 16MB
		for (i = 0; i < 2048; i++)
			write(fd, buf, 8192);

		fsync(fd);

		lseek(fd, 0, SEEK_SET);

		// now overwrite it
		for (i = 0; i < 2048; i++)
			write(fd, buf, 8192);

		fsync(fd);

		close(fd);
	}

	return 0;
}

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

char buf[8192] = {0};

int main()
{
	int i;
	int filenum;
	char filename[64];

	for (filenum = 0; filenum < 64; filenum++)
	{
		sprintf(filename, "/tmp/afile.%02d", filenum);
		int fd = open(filename, O_CREAT | O_EXCL | O_WRONLY, 0600);

		// allocate all 16MB
		posix_fallocate(fd, 0, 16 * 1024 * 1024);

		fsync(fd);

		lseek(fd, 0, SEEK_SET);

		// now overwrite it
		for (i = 0; i < 2048; i++)
			write(fd, buf, 8192);

		fsync(fd);
		close(fd);
	}

	return 0;
}

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

char buf[8192] = {0};

int main()
{
	int i;
	int filenum;
	char filename[64];

	for (filenum = 0; filenum < 64; filenum++)
	{
		sprintf(filename, "/tmp/afile.%02d", filenum);
		int fd = open(filename, O_CREAT | O_EXCL | O_WRONLY, 0600);

		// allocate all 16MB, glibc-style: one zero byte per 4kB block
		for (i = 0; i < 4096; i++)
			pwrite(fd, "\0", 1, 4096 * (i + 1) - 1);

		fsync(fd);

		lseek(fd, 0, SEEK_SET);

		// now overwrite it
		for (i = 0; i < 2048; i++)
			write(fd, buf, 8192);

		fsync(fd);

		close(fd);
	}

	return 0;
}

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WITH CHECK OPTION for auto-updatable views

2013-07-05 Thread Pavel Stehule
Hello

just some notes:

* autocomplete for INSERT, UPDATE, DELETE should show updatable views too

* can you explain better in the docs the difference between WITH CASCADED
and WITH LOCAL CHECK OPTION - please add a simple example to the docs

* is possible to better identify (describe) failed constraints?

postgres=# create view v1 as select * from bubu where a > 0;
CREATE VIEW
postgres=# create view v2 as select * from v1 where a < 10 with check option;
CREATE VIEW
postgres=# insert into v1 values(-10);
INSERT 0 1
postgres=# insert into v2 values(-10);
ERROR:  new row violates WITH CHECK OPTION for view "v2" --- but this
constraint is related to v1
DETAIL:  Failing row contains (-10).

* I found a difference from MySQL - there, the LOCAL option ignores all other views' constraints

postgres=# CREATE TABLE t1 (a INT);
CREATE TABLE
postgres=# CREATE VIEW v1 AS SELECT * FROM t1 WHERE a < 2 WITH CHECK OPTION;
CREATE VIEW
postgres=# CREATE VIEW v2 AS SELECT * FROM v1 WHERE a > 0 WITH LOCAL
CHECK OPTION;
CREATE VIEW
postgres=# INSERT INTO v2 VALUES (2);
ERROR:  new row violates WITH CHECK OPTION for view "v1" -- it will be
ok on MySQL
DETAIL:  Failing row contains (2).

Probably MySQL is wrong (its behaviour differs from DB2's) -- but who
knows: http://bugs.mysql.com/bug.php?id=6404

What is the correct behaviour?

Regards

Pavel


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] fallocate / posix_fallocate for new WAL file creation (etc...)

2013-07-05 Thread Greg Smith

On 7/5/13 2:50 AM, Jeff Davis wrote:

So, my simple conclusion is that glibc emulation should be about the
same as what we're doing now, so there's no reason to avoid it. That
means, if posix_fallocate() is present, we should use it, because it's
either the same (if emulated in glibc) or significantly faster (if
implemented in the kernel).


That's what I'm seeing everywhere too.  I'm happy that we've spent
enough time chasing after potential issues without finding anything now.
Pull out the GUC that was added for default and this is ready to commit.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Improvement of checkpoint IO scheduler for stable transaction responses

2013-07-05 Thread KONDO Mitsumasa

(2013/07/05 0:35), Joshua D. Drake wrote:

On 07/04/2013 06:05 AM, Andres Freund wrote:

Presumably the smaller segsize is better because we don't
completely stall the system by submitting up to 1GB of io at once. So,
if we were to do it in 32MB chunks and then do a final fsync()
afterwards we might get most of the benefits.

Yes, I will try to test with the setting './configure --with-segsize=0.03125'
tonight. I will send you the test result tomorrow.




I did testing on this a few years ago: I tried 2MB segments instead of 16MB,
thinking similarly to you. It failed miserably; performance completely tanked.
Just as you say, the test result was miserable... Too small a segsize is bad
for performance. It might be improved by separate directories, but too many
FDs with open() and close() seem to be bad. However, I think that this
implementation has the potential to improve IO performance, so we need to
try testing it with some other methods.


* Performance result in DBT-2 (WH340)
                                 | NOTPM    90%tile    Average  Maximum
 --------------------------------+--------------------------------------
 original_0.7 (baseline)         | 3474.62  18.348328  5.739    36.977713
 fsync + write                   | 3586.85  14.459486  4.960    27.266958
 fsync + write + segsize=0.25    | 3661.17   8.28816   4.117    17.23191
 fsync + write + segsize=0.03125 | 3309.99  10.851245  6.759    19.500598


(2013/07/04 22:05), Andres Freund wrote:
 1) it breaks pg_upgrade. Which means many of the bigger users won't be
 able to migrate to this and most packagers would carry the old
 segsize around forever.
 Even if we could get pg_upgrade to split files accordingly link mode
 would still be broken.
I think that pg_upgrade is contrib, not part of the main implementation of
Postgres, so contrib should not stand in the way of improving the main
implementation. Pg_upgrade users might share the same opinion.


 2) It drastically increases the amount of file handles neccessary and by
 extension increases the amount of open/close calls. Those aren't all
 that cheap. And it increases metadata traffic since mtime/atime are
 kept for more files. Also, file creation is rather expensive since it
 requires metadata transaction on the filesystem level.
My test result seemed to show this problem, but my test did not use separate
directories in base/. I'm not sure which way is best. If you have time to
create a patch, please send it to us and I will try to test it in DBT-2.


Best regards,
--
Mitsumasa KONDO
NTT Open Source Software Center


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Block write statistics WIP

2013-07-05 Thread Greg Smith

On 7/1/13 3:10 AM, Satoshi Nagayasu wrote:

Or should we add some pointer, which is accociated with the Relation,
into BufferDesc? Maybe OID?


That is the other option here, I looked at it but didn't like it.  The 
problem is that at the point when a new page is created, it's not always 
clear yet what relation it's going to hold.  That means that if the 
buffer page itself is where you look up the relation OID, every code 
path that manipulates relation IDs will need to worry about maintaining 
this information.  It's possible to do it that way, but I don't know 
that it's any less work than making all the write paths keep track of 
it.  It would need some extra locking around updating the relation tag 
in the buffer pages too, and I'd prefer not to


The other thing that I like about the writing side is that I can 
guarantee the code is correct pretty easily.  Yes, it's going to take 
days worth of modifying writing code.  But the compile will force you to 
fix all of them, and once they're all updated I'd be surprised if 
something unexpected happened.


If instead the data moves onto the buffer page header instead, we could 
be chasing bugs similar to the relcache ones forever.  Also, as a tie 
breaker, buffer pages will get bigger and require more locking this way. 
 Making writers tell us the relation doesn't need any of that.



I'm thinking of FlushBuffer() too. How can we count physical write
for each relation in FlushBuffer()? Or any other appropriate place?


SMgrRelation is available there, so tagging the relation should be easy 
in this case.



BTW, Greg's first patch could not be applied to the latest HEAD.
I guess it needs some fix, though I couldn't clarify it.


Since this is a WIP patch, what I was looking for was general design 
approach feedback, with my bit as a simple example.  I didn't expect 
anyone to spend much time doing a formal review of that quick hack. 
You're welcome to try and play with that code to add things, I'm not 
very attached to it yet.  I've basically gotten what I wanted to here 
from this submission; I'm marking it returned with feedback.  If anyone 
else has comments, I'm still open to discussion here too.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.3 bug fix] ECPG does not escape backslashes

2013-07-05 Thread Michael Meskes
On Wed, Jul 03, 2013 at 07:22:48PM +0900, MauMau wrote:
 not specific to 9.3.  Could you commit and backport this?

Committed to 8.4, 9.0, 9.1, 9.2 and HEAD.

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
Jabber: michael.meskes at gmail dot com
VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WITH CHECK OPTION for auto-updatable views

2013-07-05 Thread Dean Rasheed
On 5 July 2013 07:02, Pavel Stehule pavel.steh...@gmail.com wrote:
 Hello

 I try to check this patch

 I have a problem with initdb after patching

 error

 initializing dependencies ... ok
 creating system views ... FATAL:  WITH CHECK OPTION is supported only
 on auto-updatable views
 STATEMENT:  /*


 I found missing initialization (strange, gcc doesn't raise warnings :( )

 +   boolcheck_option;
 +   boolsecurity_barrier;


Ah, good catch. I was being careless there.

It turns out that although I compile with -Wall which implies
-Wuninitialized and -Wmaybe-uninitialized, those warnings are only
supported in optimised builds, which is why I didn't see it. So I've
learned something new today: always do an optimised build as well
during development.

Will fix. Thanks.

Regards,
Dean


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.3 bug fix] ECPG does not escape backslashes

2013-07-05 Thread MauMau

From: Michael Meskes mes...@postgresql.org
 So that means MauMau was right and backslashes have to be escaped in
 filenames in #line directives, right? Apparently my examples were
 badly chosen as I didn't see an error no matter how many backslashes
 I had.


Yes, the example below shows the case:

[maumau@myhost ~]$ touch ab\\c/a.pgc
[maumau@myhost ~]$ ecpg ab\\c/a.pgc
[maumau@myhost ~]$ gcc -c -I/tuna/pgsql/include ab\\c/a.c
ab\c/a.c:8:9: warning: unknown escape sequence '\c'
[maumau@myhost ~]$



Committed to 8.4, 9.0, 9.1, 9.2 and HEAD.


Thank you very much for your quick support.

Regards
MauMau



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] changeset generation v5-01 - Patches git tree

2013-07-05 Thread Andres Freund
On 2013-06-27 21:52:03 -0700, Kevin Grittner wrote:
 Andres Freund and...@2ndquadrant.com wrote:
 
  Hm. There were some issues with the test_logical_decoding
  Makefile not cleaning up the regression installation properly.
  Which might have caused the issue.
 
  Could you try after applying the patches and executing a clean
  and then rebuild?
 
 Tried, and problem persists.
 
  Otherwise, could you try applying my git tree so we are sure we
  test the same thing?
 
  $ git remote add af 
  git://git.postgresql.org/git/users/andresfreund/postgres.git
  $ git fetch af
  $ git checkout -b xlog-decoding af/xlog-decoding-rebasing-cf4
  $ ./configure ...
  $ make
 
 Tried that, too, and problem persists.  The log shows the last
 commit on your branch as 022c2da1873de2fbc93ae524819932719ca41bdb.

Ok. I think I have a slight idea what's going on. Could you check
whether recompiling with -O0 fixes the issue?

There's something strange going on here, not sure whether it's just a
bug that's hidden, by either not doing optimizations or by adding more
elog()s, or wheter it's a compiler bug.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal - Support for National Characters functionality

2013-07-05 Thread Andrew Dunstan


On 07/05/2013 02:12 AM, Arulappan, Arul Shaji wrote:

- Support for UTF16 column encoding and representing NCHAR and
NVARCHAR columns in UTF16 encoding in all databases.

Why do you need UTF-16 as the database encoding? UTF-8 is already
supported, and any UTF-16 character can be represented in UTF-8 as
far as I know.

Yes, that's correct. However there are advantages in using UTF-16
encoding for those characters that are always going to take at least
two bytes to represent.





Any suggestion to store data in utf-16 is likely to be a complete 
non-starter. I suggest you research our previously stated requirements 
for server side encodings.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.3 bug fix] ECPG does not escape backslashes

2013-07-05 Thread Andrew Dunstan


On 07/05/2013 05:16 AM, Michael Meskes wrote:

On Wed, Jul 03, 2013 at 07:22:48PM +0900, MauMau wrote:

not specific to 9.3.  Could you commit and backport this?

Committed to 8.4, 9.0, 9.1, 9.2 and HEAD.



This looks incomplete. Surely just escaping backslashes alone is not
enough. I suspect at least the " char and any chars below 0x20 should be
quoted also.
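A sketch of the kind of escaping being suggested -- backslashes, the double-quote character, and control characters below 0x20 -- for filenames emitted into #line directives. `escape_filename` is an illustrative helper under those assumptions, not the actual ECPG code.

```c
#include <stdio.h>

/* Escape a filename for use inside a C #line directive string literal:
 * backslashes and double quotes get a preceding backslash, and control
 * characters below 0x20 become octal escapes.  Writes at most outsize
 * bytes (including the terminating NUL) into out. */
static void
escape_filename(const char *in, char *out, size_t outsize)
{
	size_t		o = 0;

	for (; *in && o + 5 < outsize; in++)
	{
		unsigned char c = (unsigned char) *in;

		if (c == '\\' || c == '"')
		{
			out[o++] = '\\';
			out[o++] = c;
		}
		else if (c < 0x20)
			o += sprintf(out + o, "\\%03o", c);	/* octal escape */
		else
			out[o++] = c;
	}
	out[o] = '\0';
}
```

With this, a path like ab\c/a.pgc is emitted as ab\\c/a.pgc, so the compiler no longer sees an unknown \c escape sequence.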


cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] LDAP: bugfix and deprecated OpenLDAP API

2013-07-05 Thread Magnus Hagander
On Mon, Jul 1, 2013 at 4:18 PM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
 Peter Eisentraut wrote:
 Btw., I just checked the source code of Apache, PHP, and PAM, and they
 are all unconditionally building with LDAP_DEPRECATED.  So maybe there
 is no hurry about this.

 I don't think that the old API functions will go away until there
 is a new standard for the LDAP C API, but I have no inside knowledge
 of OpenLDAP.

Based on this, I think it's reasonable to not worry about
LDAP_DEPRECATED at this point, and only take that plunge when we
actually *need* some of that functionality.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] LDAP: bugfix and deprecated OpenLDAP API

2013-07-05 Thread Magnus Hagander
On Mon, Jul 1, 2013 at 4:16 PM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
 Magnus Hagander wrote:
 On Tue, Feb 5, 2013 at 10:39 AM, Albe Laurenz laurenz.a...@wien.gv.at 
 wrote:
 I found a small bug in the implementation of LDAP connection
 parameter lookup.

 [...]

 As coded now, the timeout won't work - if the LDAP server
 is down, ldap_simple_bind will wait for the network
 timeout, which will be quite longer than 2 seconds.

 The attached patch ldap-bug.patch fixes this problem;
 unfortunately I found no way that works both with OpenLDAP
 and Windows LDAP, so I had to add an #ifdef.

 I think that this patch should be applied and backpatched.

 So just to be clear - the difference is we're going from implicit
 anonymous bind, to an explicit one? We're not actually causing an
 extra bind compared to previous versions?

 No, it was an explicit bind before as well.

Ah, got it.

In that case, doesn't this patch break Windows? We no longer do the
anonymous bind on Windows, since it's now in the #ifdef HAVE_LIBLDAP.

Don't we need to keep the ldap_simple_bind() call in the Windows case,
or break it up so the call to ldap_sasl_bind_s() is moved outside the
#ifdef? At least I can't find anything in the docs that indicate that
ldap_connect() on Windows would actually call that for us - only the
other way around?


 I'll be on vacation from Wednesday on until July 20th.

Sorry I couldn't get back to you on that one earlier.

I'm going to set this patch as returned with feedback for now, but
please feel free to comment on above and possibly resubmit if
necessary before the CF and I'll see if I can deal with it before the
next CF anyway, as it's a bug fix.

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.3 bug fix] ECPG does not escape backslashes

2013-07-05 Thread Michael Meskes
On Fri, Jul 05, 2013 at 08:08:06AM -0400, Andrew Dunstan wrote:
 This looks incomplete. Surely just escaping backslashes alone is not
 enough. I suspect at least the " char and any chars below 0x20
 should be quoted also.

Right, this didn't even occur to me, but there are surely more characters that
need to be escaped. Gotta dig into this.

Thanks for pointing this out.

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
Jabber: michael.meskes at gmail dot com
VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] autoanalyze criteria

2013-07-05 Thread Magnus Hagander
On Wed, May 15, 2013 at 2:33 AM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
 On 24/02/13 10:51, Mark Kirkwood wrote:

 On 24/02/13 10:12, Stefan Andreatta wrote:


 On 02/23/2013 09:30 PM, Jeff Janes wrote:

 Moved discussion from General To Hackers.

 On Sat, Feb 23, 2013 at 10:41 AM, Stefan Andreatta
 s.andrea...@synedra.com mailto:s.andrea...@synedra.com wrote:


 On 02/23/2013 05:10 PM, Jeff Janes wrote:


  Sorry, I got tunnel vision about how the threshold was
  computed, and forgot about the thing it was compared to.  There
  is a secret data point in the stats collector
  called changes_since_analyze.  This is not exposed in
  pg_stat_user_tables.  But I think it should be, as I have often
  wanted to see it.



 Sounds like a very good idea to me - any way I could help to make
 such a thing happen?



 It should be fairly easy to implement because the other columns are
 already there to show you the way, and if you want to try your hand at
 hacking pgsql it would be a good introduction to doing so.

  Look at each instance in the code of n_dead_tup and
  pg_stat_get_dead_tuples, and those are the places where
  changes_since_analyze also needs to be addressed, in an analogous
  manner (assuming it isn't already there.)

 git grep 'n_dead_tup'

 It looks like we would need to add an SQL function to retrieve the
 data, then incorporate that function into the view definitions that
 make up the pg_stat_user_tables etc. views. and of course update the
 regression test and the documentation.

 Other than implementing it, we would need to convince other hackers
 that this is desirable to have.  I'm not sure how hard that would be.
 I've looked in the archives to see if this idea was already considered
 but rejected, but I don't see any indication that it was previously
 considered.

 (http://www.postgresql.org/message-id/4823.1262132...@sss.pgh.pa.us).

 Cheers,

 Jeff


 Not being a developer, I am afraid I will not be implementing it
 myself - nor would anybody wish that ;-)

 I also searched the archives, but the closest I found is a discussion on
 the Admin List starting here:

 http://www.postgresql.org/message-id/626919622.7634700.1351695913466.javamail.r...@alaloop.com

 On the other hand, there is quite a lot of discussion about making
 autoanalyze more (or less) aggressive - which seems a difficult task to
 me, when you cannot even check what's triggering your autoanalyze.

 Anybody else interested?


 I was asked about this exact thing the other day - it would be very nice
 to have the information visible. I may take a look at doing it (I've done
 some hacking on the stats system previously). However don't let that put
 anyone else off - as I'll have to find the time to start :-)




 I happened to be looking at the whole autovacuum/analyze setup in another
 context - which reminded me about volunteering to take a look at a patch for
 adding changes_since_analyze. So with probably impeccably poor timing (smack
 in the middle of 9.3 beta), here is a patch that does that (so it is
 probably an early 9.4 addition).

 I've called the column n_changes_since_analyze - I can sense that there
 might be discussion about how to maybe shorten that :-) , and added a doc
 line for the view + updated the regression test expected input.

Applied, with the changes suggested by Laurenz Albe in his review.

Thanks!


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Review: Display number of changed rows since last analyze

2013-07-05 Thread Magnus Hagander
On Mon, Jul 1, 2013 at 3:15 PM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
 Magnus Hagander wrote:
 On Mon, Jun 17, 2013 at 1:49 PM, Albe Laurenz laurenz.a...@wien.gv.at 
 wrote:
 I think that the column name is ok as it is, even if it
 is a bit long - I cannot come up with a more succinct
 idea.  Perhaps n_changed_since_analyze could be shortened
 to n_mod_since_analyze, but that's not much of an improvement.

 AFAICT it's related to n_live_tup, and n_dead_tup. How about just
 n_mod_tup? Though that doesn't convey that it's since the last
 analyze, I guess.

 But given that both n_dead_tup and n_live_tup don't really indicate
 that they're not since the beginning of stats in the name (which
 other stats counters are), I'm not sure that's a problem? It would be
 a name that sounds more similar to the rest of the table.

 I don't get that.

 As far as I know, n_dead_tup and n_live_tup are estimates for
 the total number of live and dead tuples, period.

 Both numbers are not reset to zero when ANALYZE (or even VACUUM)
 takes place.

 No, but they are zero *until* vacuum runs.

 The point I was trying to make was that they are showing an absolute
 number. Unlike for example n_tup_inserted and friends which show the
 total number of event since stat reset.

 Ok, I understand you now.

 All the old names are fairly intuitive in my opinion.

 "Number of live tuples since the statistics were reset" doesn't make
 a lot of sense to me, so I would automatically read that as an absolute
 number.

 But it would not be clear to me that n_mod_tuples are counted
 since the last ANALYZE (different from other columns); I would
 jump to the conclusion that it is a sum of n_tup_ins, n_tup_upd
 and n_tup_del.

 So I think that a name that is less likely to cause confusion
 would be better than a short, but misleading name.

Yeah, you're probably right. Applied with your suggested name, and
some further minor tweaking on the wording in the docs.

Thanks!

--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Review: Patch to compute Max LSN of Data Pages

2013-07-05 Thread Hari Babu
On Thursday, July 04, 2013 11:19 PM Robert Haas wrote:

+  fprintf(stderr, _("%s: .. file \"%s\" for seeking: %s\n"),
+  progname, filename, strerror(errno));

Weird error message style - what's with the ".."?

+  fprintf(stderr, _("%s: .. file \"%s\" length is more than segment size: %d.\n"),
+  progname, filename, RELSEG_SIZE);

Again.

Corrected.

+  fprintf(stderr, _("%s: could not seek to next page \"%s\": %s\n"),
+  progname, filename, strerror(errno));

I think this should be written as: could not seek to offset NUMBER in
file PATH

+  fprintf(stderr, _("%s: could not read file \"%s\": %s\n"),
+  progname, filename, strerror(errno));

And this one as: could not read file PATH at offset NUMBER

+  printf("File:%s Maximum LSN found is: %X/%X \nWAL segment file name (fileid,seg): %X/%X\n",

I think that we don't need to display the WAL segment file name for
the per-file progress updates.  Instead, let's show the block number
where that LSN was found, like this:

Highest LSN for file %s is %X/%X in block %u.

The overall output can stay as you have it.

Changed as per your suggestion.

+  if (0 != result)

Coding style.

+ static int
+ FindMaxLSNinFile(char *filename, XLogRecPtr *maxlsn)

It seems that this function, and a few others, use -1 for a failure
return, 0 for success, and all others undefined.  Although this is a
defensible choice, I think it would be more PG-like to make the return
value bool and use true for success and false for failure.

+  if (S_ISREG(statbuf.st_mode))
+  (void) FindMaxLSNinFile(pathbuf, maxlsn);
+  else
+  (void) FindMaxLSNinDir(pathbuf, maxlsn);

I don't see why we're throwing away the return value here.  I would
expect the function to have a bool result = true at the top and set
result = false if one of these functions returns false.  At the end,
it returns result.
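A minimal sketch of the convention being suggested here — helpers return true on success and false on failure, and the caller accumulates an overall result instead of discarding the return values. The names are illustrative only, not taken from the patch:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for a per-file worker such as FindMaxLSNinFile():
 * returns true on success, false on failure (PG-like convention). */
static bool
process_one(int value)
{
	return value >= 0;			/* pretend negative values are I/O errors */
}

/* Stand-in for the directory walker: start with result = true, remember
 * any failure, but keep processing the remaining entries. */
static bool
process_all(const int *values, int n)
{
	bool		result = true;

	for (int i = 0; i < n; i++)
		if (!process_one(values[i]))
			result = false;		/* don't throw the failure away */
	return result;
}
```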

Fixed.

+This utility can only be run by the user who installed the server,
because
+it requires read/write access to the data directory.

False.

Removed.

+  LsnSearchPath = argv[optind];
+
+  if (LsnSearchPath == NULL)
+  LsnSearchPath = getenv(PGDATA);

I think you should write the code so that the tool loops over its
input arguments; if none, it processes $PGDATA.  (Don't forget to
update the syntax synopsis and documentation to match.)

Added the functionality of parsing multiple files or directories and
printing the max LSN for each input argument.

I think the documentation should say something about the intended uses
of this tool, including cautions against using it for things to which
it is not well-suited.  I guess the threshold question for this patch
is whether those uses are enough to justify including the tool in the
core distribution.

Added some use cases and notes regarding the tool. Please suggest if any
more information needs to be documented.

Thanks for the review, please find the updated patch attached in the mail.

Regards,
Hari babu.



pg_computemaxlsn_v8.patch
Description: Binary data



Re: [HACKERS] fallocate / posix_fallocate for new WAL file creation (etc...)

2013-07-05 Thread Jon Nelson
On Fri, Jul 5, 2013 at 2:23 AM, Greg Smith g...@2ndquadrant.com wrote:
 On 7/5/13 2:50 AM, Jeff Davis wrote:

 So, my simple conclusion is that glibc emulation should be about the
 same as what we're doing now, so there's no reason to avoid it. That
 means, if posix_fallocate() is present, we should use it, because it's
 either the same (if emulated in glibc) or significantly faster (if
 implemented in the kernel).
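The behaviour under discussion can be exercised with a small standalone program (a sketch for POSIX systems; file name and size are arbitrary). On kernels and filesystems with native support, posix_fallocate() reserves the space without writing zeroes; where glibc must emulate it, the visible effect is the same, at roughly the cost of writing the zeroes ourselves:

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Create a file and reserve `size` bytes for it up front, as one might
 * for a new WAL segment.  Returns the resulting file size, or -1. */
static long
allocate_segment(const char *path, off_t size)
{
	int			fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0600);
	struct stat st;
	int			rc;

	if (fd < 0)
		return -1;
	rc = posix_fallocate(fd, 0, size);	/* 0 on success, errno value on failure */
	fstat(fd, &st);
	close(fd);
	return (rc == 0) ? (long) st.st_size : -1;
}
```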


 That's what I'm seeing everywhere too.  I'm happy that we've spent enough
 time chasing after potential issues without finding anything now.  Pull out
 the GUC that was added for default and this is ready to commit.

Wonderful. Is removing the GUC something that I should do or should
that be done by somebody that knows more about what they are doing? (I
am happy to give it a go!)

Should the small test program that I made also be included somewhere
in the source tree?

--
Jon




Re: [HACKERS] changeset generation v5-01 - Patches git tree

2013-07-05 Thread Andres Freund
On 2013-07-05 14:03:56 +0200, Andres Freund wrote:
 On 2013-06-27 21:52:03 -0700, Kevin Grittner wrote:
  Andres Freund and...@2ndquadrant.com wrote:
  
   Hm. There were some issues with the test_logical_decoding
   Makefile not cleaning up the regression installation properly.
   Which might have caused the issue.
  
   Could you try after applying the patches and executing a clean
   and then rebuild?
  
  Tried, and problem persists.
  
   Otherwise, could you try applying my git tree so we are sure we
   test the same thing?
  
   $ git remote add af 
   git://git.postgresql.org/git/users/andresfreund/postgres.git
   $ git fetch af
   $ git checkout -b xlog-decoding af/xlog-decoding-rebasing-cf4
   $ ./configure ...
   $ make
  
  Tried that, too, and problem persists.  The log shows the last
  commit on your branch as 022c2da1873de2fbc93ae524819932719ca41bdb.
 
 Ok. I think I have a slight idea what's going on. Could you check
 whether recompiling with -O0 fixes the issue?
 
 There's something strange going on here, not sure whether it's just a
 bug that's hidden, by either not doing optimizations or by adding more
 elog()s, or whether it's a compiler bug.

Ok. It was supreme stupidity on my end. Sorry for the time you spent on
it.

Some versions of gcc (and probably other compilers) were removing
sections of code when optimizing because the code was doing undefined
things. Parts of the rdata chain were allocated locally in an
if (needs_key). Which obviously is utterly bogus... A warning would have
been nice though.

Fix pushed and attached.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From ddbaa1dbf8e0283b41098f5a08a8d21d809b9a63 Mon Sep 17 00:00:00 2001
From: Andres Freund and...@anarazel.de
Date: Fri, 5 Jul 2013 15:07:19 +0200
Subject: [PATCH] wal_decoding: mergme: Don't use out-of-scope local variables
 as part of the rdata chain

Depending on optimization level and other configuration flags, the compiler
removed the sections of code doing that, since doing so invokes undefined
behaviour, making it legal for the compiler to do so.
---
 src/backend/access/heap/heapam.c | 37 ++---
 1 file changed, 18 insertions(+), 19 deletions(-)

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index f51b73f..f9f1705 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -5987,9 +5987,10 @@ log_heap_update(Relation reln, Buffer oldbuf,
 {
 	xl_heap_update xlrec;
 	xl_heap_header_len xlhdr;
+	xl_heap_header_len xlhdr_idx;
 	uint8		info;
 	XLogRecPtr	recptr;
-	XLogRecData rdata[4];
+	XLogRecData rdata[7];
 	Page		page = BufferGetPage(newbuf);
 
 	/*
@@ -6054,40 +6055,38 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	*/
 	if(need_tuple_data)
 	{
-		XLogRecData rdata_logical[4];
-
-		rdata[3].next = &(rdata_logical[0]);
+		rdata[3].next = &(rdata[4]);
 
-		rdata_logical[0].data = NULL,
-		rdata_logical[0].len = 0;
-		rdata_logical[0].buffer = newbuf;
-		rdata_logical[0].buffer_std = true;
-		rdata_logical[0].next = NULL;
+		rdata[4].data = NULL,
+		rdata[4].len = 0;
+		rdata[4].buffer = newbuf;
+		rdata[4].buffer_std = true;
+		rdata[4].next = NULL;
 		xlrec.flags |= XLOG_HEAP_CONTAINS_NEW_TUPLE;
 
 		/* candidate key changed and we have a candidate key */
 		if (idx_tuple)
 		{
 			/* don't really need this, but its more comfy */
-			xl_heap_header_len xlhdr_idx;
 			xlhdr_idx.header.t_infomask2 = idx_tuple->t_data->t_infomask2;
 			xlhdr_idx.header.t_infomask = idx_tuple->t_data->t_infomask;
 			xlhdr_idx.header.t_hoff = idx_tuple->t_data->t_hoff;
 			xlhdr_idx.t_len = idx_tuple->t_len;
 
-			rdata_logical[0].next = &(rdata_logical[1]);
-			rdata_logical[1].data = (char *) &xlhdr_idx;
-			rdata_logical[1].len = SizeOfHeapHeaderLen;
-			rdata_logical[1].buffer = InvalidBuffer;
-			rdata_logical[1].next = &(rdata_logical[2]);
+			rdata[4].next = &(rdata[5]);
+			rdata[5].data = (char *) &xlhdr_idx;
+			rdata[5].len = SizeOfHeapHeaderLen;
+			rdata[5].buffer = InvalidBuffer;
+			rdata[5].next = &(rdata[6]);
 
 			/* PG73FORMAT: write bitmap [+ padding] [+ oid] + data */
-			rdata_logical[2].data = (char *) idx_tuple->t_data
+			rdata[6].data = (char *) idx_tuple->t_data
 				+ offsetof(HeapTupleHeaderData, t_bits);
-			rdata_logical[2].len = idx_tuple->t_len
+			rdata[6].len = idx_tuple->t_len
 				- offsetof(HeapTupleHeaderData, t_bits);
-			rdata_logical[2].buffer = InvalidBuffer;
-			rdata_logical[2].next = NULL;
+			rdata[6].buffer = InvalidBuffer;
+			rdata[6].next = NULL;
+
 			xlrec.flags |= XLOG_HEAP_CONTAINS_OLD_KEY;
 		}
 	}
-- 
1.8.2.rc2.4.g7799588.dirty




Re: [HACKERS] changeset generation v5-01 - Patches git tree

2013-07-05 Thread Steve Singer

On 07/05/2013 08:03 AM, Andres Freund wrote:

On 2013-06-27 21:52:03 -0700, Kevin Grittner wrote:
Tried that, too, and problem persists.  The log shows the last commit 
on your branch as 022c2da1873de2fbc93ae524819932719ca41bdb. 

Ok. I think I have a slight idea what's going on. Could you check
whether recompiling with -O0 fixes the issue?

There's something strange going on here, not sure whether it's just a
bug that's hidden, by either not doing optimizations or by adding more
elog()s, or wheter it's a compiler bug.


I am getting the same test failure Kevin is seeing.
This is on a x64 Debian wheezy machine with
gcc (Debian 4.7.2-5) 4.7.2

Building with -O0 results in passing tests.




Greetings,

Andres Freund







Re: [HACKERS] changeset generation v5-01 - Patches git tree

2013-07-05 Thread Andres Freund
On 2013-07-05 09:28:45 -0400, Steve Singer wrote:
 On 07/05/2013 08:03 AM, Andres Freund wrote:
 On 2013-06-27 21:52:03 -0700, Kevin Grittner wrote:
 Tried that, too, and problem persists.  The log shows the last commit on
 your branch as 022c2da1873de2fbc93ae524819932719ca41bdb.
 Ok. I think I have a slight idea what's going on. Could you check
 whether recompiling with -O0 fixes the issue?
 
 There's something strange going on here, not sure whether it's just a
 bug that's hidden, by either not doing optimizations or by adding more
 elog()s, or whether it's a compiler bug.
 
 I am getting the same test failure Kevin is seeing.
 This is on a x64 Debian wheezy machine with
 gcc (Debian 4.7.2-5) 4.7.2
 
 Building with -O0 results in passing tests.

Does the patch from
http://archives.postgresql.org/message-id/20130705132513.GB11640%40awork2.anarazel.de
or the git tree (which is rebased ontop of the mvcc catalog commit from
robert which needs some changes) fix it, even with optimizations?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] changeset generation v5-01 - Patches git tree

2013-07-05 Thread Steve Singer

On 07/05/2013 09:34 AM, Andres Freund wrote:

On 2013-07-05 09:28:45 -0400, Steve Singer wrote:

On 07/05/2013 08:03 AM, Andres Freund wrote:

On 2013-06-27 21:52:03 -0700, Kevin Grittner wrote:

Tried that, too, and problem persists.  The log shows the last commit on
your branch as 022c2da1873de2fbc93ae524819932719ca41bdb.

Ok. I think I have a slight idea what's going on. Could you check
whether recompiling with -O0 fixes the issue?

There's something strange going on here, not sure whether it's just a
bug that's hidden, by either not doing optimizations or by adding more
elog()s, or whether it's a compiler bug.

I am getting the same test failure Kevin is seeing.
This is on a x64 Debian wheezy machine with
gcc (Debian 4.7.2-5) 4.7.2

Building with -O0 results in passing tests.

Does the patch from
http://archives.postgresql.org/message-id/20130705132513.GB11640%40awork2.anarazel.de
or the git tree (which is rebased ontop of the mvcc catalog commit from
robert which needs some changes) fix it, even with optimizations?



Yes your latest git tree the tests pass with -O2



Greetings,

Andres Freund







Re: [HACKERS] strange IS NULL behaviour

2013-07-05 Thread Peter Eisentraut
On 7/4/13 5:06 PM, Alvaro Herrera wrote:
 FWIW if changing the behavior of NOT NULL constraints is desired, I
 still have the patch to catalogue them around, if anyone wants to play
 around.  I haven't gotten around to finishing it up, yet :-(

If your latest patch isn't publicly available, I'd like to see it.  I
might work on that later this year.






Re: [HACKERS] [9.3 bug fix] ECPG does not escape backslashes

2013-07-05 Thread Tom Lane
Michael Meskes mes...@postgresql.org writes:
 On Wed, Jul 03, 2013 at 07:22:48PM +0900, MauMau wrote:
 not specific to 9.3.  Could you commit and backport this?

 Committed to 8.4, 9.0, 9.1, 9.2 and HEAD.

Um ... 9.3 is a separate branch now, please fix it there also.

regards, tom lane




Re: [HACKERS] [COMMITTERS] pgsql: Add new GUC, max_worker_processes, limiting number of bgworkers.

2013-07-05 Thread Magnus Hagander
On Fri, Jul 5, 2013 at 6:28 AM, Kevin Hale Boyes kcbo...@gmail.com wrote:
 The change to config.sgml contains a small typo with the double r in the
 xreflabel.

 +  <varlistentry id="guc-max-worker-processes"
 xreflabel="max_worrker_processes">

Fixed, thanks.


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] strange IS NULL behaviour

2013-07-05 Thread Bruce Momjian
On Thu, Jul  4, 2013 at 04:29:20PM -0400, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  I developed the attached patch which properly recurses into ROW()
  records checking for NULLs;  you can see it returns the right answer in
  all cases (and constant folds too):
 
 My recollection of the previous discussion is that we didn't have
 consensus on what the right behavior is, so I'm not sure you can just
 assert that this patch is right.  In any case this is only touching the
 tip of the iceberg.  If we intend that rows of nulls should be null,
 then we have got issues with, for example, NOT NULL column constraint
 checks, which don't have any such recursion built into them.  I think
 the same is true for plpgsql variable NOT NULL restrictions, and there
 are probably some other places.

Well we have three cases:

1  SELECT ROW(NULL) IS NULL;
2  SELECT ROW(ROW(NULL)) IS NULL;
3  SELECT ROW(ROW(ROW(NULL))) IS NULL;

I think we could have them all return false, or all true, or the first
one true, and the rest false.  What I don't think we can justify is 1
and 2 as true, and 3 false.
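One of the candidate semantics — a row is null iff all of its fields are recursively null, which makes all three cases above return true — can be modelled in a few lines (a hypothetical miniature, not the actual executor code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy value: either a scalar (fields == NULL) or a ROW(...) of fields. */
typedef struct Value
{
	bool		is_basic_null;	/* true for a plain NULL scalar */
	const struct Value *fields; /* non-NULL for ROW(...) values */
	size_t		nfields;
} Value;

/* Recursive IS NULL: a row is null iff every field is (recursively) null. */
static bool
value_is_null(const Value *v)
{
	if (v->fields == NULL)
		return v->is_basic_null;
	for (size_t i = 0; i < v->nfields; i++)
		if (!value_is_null(&v->fields[i]))
			return false;		/* one non-null field => row is not null */
	return true;
}
```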

Can someone show how those others behave?  I don't know enough to test
it.

  The optimizer seems like the right place to fix this, per my patch.
 
 No, it isn't, or at least it's far from the only place.  If we're going
 to change this, we would also want to change the behavior of tests on
 RECORD values, which is something that would have to happen at runtime.

I checked RECORD and that behaves with recursion:

SELECT RECORD(NULL) IS NULL;
 ?column?
--
 t

SELECT RECORD(RECORD(NULL)) IS NULL;
 ?column?
--
 t

SELECT RECORD(RECORD(RECORD(NULL))) IS NULL;
 ?column?
--
 t

Am I missing something?

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Review: UNNEST (and other functions) WITH ORDINALITY

2013-07-05 Thread Vik Fearing
On 07/04/2013 10:15 AM, Dean Rasheed wrote:
 On 4 July 2013 00:08, David Fetter da...@fetter.org wrote:
 Patch re-jiggered for recent changes to master.

 I re-validated this, and it all still looks good, so still ready for
 committer IMO.

I tried to check this out, too, and make check fails with the
following.  I have not looked into the cause.

$ cat /home/vik/postgresql/git/src/test/regress/log/initdb.log
Running in noclean mode.  Mistakes will not be cleaned up.
The files belonging to this database system will be owned by user vik.
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:en_US.UTF-8
  MESSAGES: C
  MONETARY: en_US.UTF-8
  NUMERIC:  en_US.UTF-8
  TIME: en_US.UTF-8
The default database encoding has accordingly been set to UTF8.
The default text search configuration will be set to english.

Data page checksums are disabled.

creating directory
/home/vik/postgresql/git/src/test/regress/./tmp_check/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
creating configuration files ... ok
creating template1 database in
/home/vik/postgresql/git/src/test/regress/./tmp_check/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... FATAL:  role with OID 256 does not exist
STATEMENT:  DELETE FROM pg_depend;
   
child process exited with exit code 1
initdb: data directory
/home/vik/postgresql/git/src/test/regress/./tmp_check/data not removed
at user's request





Re: [HACKERS] strange IS NULL behaviour

2013-07-05 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 On Thu, Jul  4, 2013 at 04:29:20PM -0400, Tom Lane wrote:
 No, it isn't, or at least it's far from the only place.  If we're going
 to change this, we would also want to change the behavior of tests on
 RECORD values, which is something that would have to happen at runtime.

 I checked RECORD and that behaves with recursion:

Apparently you don't even understand the problem.  All of these examples
you're showing are constants.  Try something like

declare r record;
...
select ... into r ...
if (r is null) ...

regards, tom lane




Re: [HACKERS] Review: UNNEST (and other functions) WITH ORDINALITY

2013-07-05 Thread David Fetter
On Fri, Jul 05, 2013 at 04:58:30PM +0200, Vik Fearing wrote:
 On 07/05/2013 04:51 PM, Vik Fearing wrote:
  I tried to check this out, too, and make check fails with the
  following.  I have not looked into the cause.
 
 Running ./configure again fixed it.  Sorry for the noise.

If I had a nickel for every apparent failure of this nature, I'd never
need to work again.

Thanks for checking :)

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] Review: UNNEST (and other functions) WITH ORDINALITY

2013-07-05 Thread Vik Fearing
On 07/05/2013 04:51 PM, Vik Fearing wrote:
 I tried to check this out, too, and make check fails with the
 following.  I have not looked into the cause.

Running ./configure again fixed it.  Sorry for the noise.




Re: [HACKERS] changeset generation v5-01 - Patches git tree

2013-07-05 Thread Steve Singer

On 06/14/2013 06:51 PM, Andres Freund wrote:

The git tree is at:
git://git.postgresql.org/git/users/andresfreund/postgres.git branch 
xlog-decoding-rebasing-cf4
http://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=shortlog;h=refs/heads/xlog-decoding-rebasing-cf4



We discussed issues related to passing options to the plugins a number 
of months ago ( 
http://www.postgresql.org/message-id/20130129015732.ga24...@awork2.anarazel.de)


I'm still having issues with the syntax you describe there.


START_LOGICAL_REPLICATION 1 0/0 (foo,bar)
 unexpected termination of replication stream: ERROR:  foo requires a 
parameter


START_LOGICAL_REPLICATION 1 0/0 (foo bar)
START_LOGICAL_REPLICATION 1 0/0 (foo bar): ERROR:  syntax error

START_LOGICAL_REPLICATION 1 0/0 (foo)
works okay



Steve





On 2013-06-15 00:48:17 +0200, Andres Freund wrote:

Overview of the attached patches:
0001: indirect toast tuples; required but submitted independently
0002: functions for testing; not required,
0003: (tablespace, filenode) syscache; required
0004: RelationMapFilenodeToOid: required, simple
0005: pg_relation_by_filenode() function; not required but useful
0006: Introduce InvalidCommandId: required, simple
0007: Adjust Satisfies* interface: required, mechanical,
0008: Allow walsender to attach to a database: required, needs review
0009: New GetOldestXmin() parameter; required, pretty boring
0010: Log xl_running_xact regularly in the bgwriter: required
0011: make fsync_fname() public; required, needs to be in a different file
0012: Relcache support for an Relation's primary key: required
0013: Actual changeset extraction; required
0014: Output plugin demo; not required (except for testing) but useful
0015: Add pg_receivellog program: not required but useful
0016: Add test_logical_decoding extension; not required, but contains
   the tests for the feature. Uses 0014
0017: Snapshot building docs; not required

Version v5-01 attached

Greetings,

Andres Freund









Re: [HACKERS] strange IS NULL behaviour

2013-07-05 Thread Bruce Momjian
On Fri, Jul  5, 2013 at 11:03:56AM -0400, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  On Thu, Jul  4, 2013 at 04:29:20PM -0400, Tom Lane wrote:
  No, it isn't, or at least it's far from the only place.  If we're going
  to change this, we would also want to change the behavior of tests on
  RECORD values, which is something that would have to happen at runtime.
 
  I checked RECORD and that behaves with recursion:
 
 Apparently you don't even understand the problem.  All of these examples
 you're showing are constants.  Try something like
 
   declare r record;
   ...
   select ... into r ...
   if (r is null) ...

Not apparently --- I already said I didn't understand the problem.
Should I just mark this as a TODO?

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] changeset generation v5-01 - Patches git tree

2013-07-05 Thread Andres Freund
On 2013-07-05 11:33:20 -0400, Steve Singer wrote:
 On 06/14/2013 06:51 PM, Andres Freund wrote:
 The git tree is at:
 git://git.postgresql.org/git/users/andresfreund/postgres.git branch 
 xlog-decoding-rebasing-cf4
 http://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=shortlog;h=refs/heads/xlog-decoding-rebasing-cf4
 
 
 We discussed issues related to passing options to the plugins a number of
 months ago ( 
 http://www.postgresql.org/message-id/20130129015732.ga24...@awork2.anarazel.de)
 
 I'm still having issues with the syntax you describe there.
 
 
 START_LOGICAL_REPLICATION 1 0/0 (foo,bar)
  unexpected termination of replication stream: ERROR:  foo requires a
 parameter

I'd guess that's coming from your output plugin? You're using
defGetString() on DefElem without a value?

 START_LOGICAL_REPLICATION 1 0/0 (foo bar)

Yes, the option *names* are identifiers, together with plugin & slot
names. The passed values need to be SCONSTs atm
(src/backend/replication/repl_gram.y):

plugin_opt_elem:
IDENT plugin_opt_arg
{
$$ = makeDefElem($1, $2);
}
;

plugin_opt_arg:
SCONST				{ $$ = (Node *) makeString($1); }
| /* EMPTY */			{ $$ = NULL; }
;

So, it would have to be:
START_LOGICAL_REPLICATION 1 0/0 (foo 'bar blub frob', sup 'star', noarg)

Now that's not completely obvious, I admit :/. Better suggestions?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [PATCH] Add transforms feature

2013-07-05 Thread Peter Eisentraut
On 7/3/13 7:15 PM, Josh Berkus wrote:
 I'm not comfortable with having all of the transform mappings in the
 main contrib/ directory though.  Can we add a subdirectory called
 transforms containing all of these?

I don't see any value in that.  The data types they apply to are in
contrib after all.




[HACKERS] Re: [BUGS] BUG #7873: pg_restore --clean tries to drop tables that don't exist

2013-07-05 Thread Pavel Stehule
Hello

I am sending a patch that removes strict requirements for DROP IF
EXISTS statements. This behaviour is similar to our current ALTER IF
EXISTS behaviour.


postgres=# DROP CAST IF EXISTS (sss AS public.casttesttype);
NOTICE:  types sss and public.casttesttype does not exist, skipping
DROP CAST
postgres=#   DROP FUNCTION IF EXISTS public.pt_in_widget(point, widget);
NOTICE:  function public.pt_in_widget(point,widget) does not exist, skipping
DROP FUNCTION
postgres=#  DROP OPERATOR IF EXISTS public.% (point, widget);
NOTICE:  operator public.% does not exist, skipping
DROP OPERATOR
postgres=# DROP TRIGGER test_trigger_exists ON no_such_table;
ERROR:  relation no_such_table does not exist
postgres=# DROP TRIGGER IF EXISTS test_trigger_exists ON no_such_table;
NOTICE:  trigger test_trigger_exists for table no_such_table does
not exist, skipping
DROP TRIGGER

This functionality is necessary for a correct, quiet reload from a dump
without spurious warnings

Regards

Pavel


2013/7/2 Pavel Stehule pavel.steh...@gmail.com:
 Hello

 remastered patch

 still there is a issue with dependencies

 Regards

 Pavel Stehule




 2013/6/17 Josh Kupershmidt schmi...@gmail.com:
 On Fri, Mar 8, 2013 at 11:58 AM, Pavel Stehule pavel.steh...@gmail.com 
 wrote:

 I'll see - please, stay tuned to 9.4 first commitfest

 Hi Pavel,
 Just a reminder, I didn't see this patch in the current commitfest. I
 would be happy to spend some more time reviewing if you wish to pursue
 the patch.

 Josh


drop-if-exists-no-double-check.patch
Description: Binary data



Re: [HACKERS] [PATCH] Add transforms feature

2013-07-05 Thread Hitoshi Harada
On Friday, July 5, 2013, Peter Eisentraut wrote:

 On 7/3/13 7:15 PM, Josh Berkus wrote:
  I'm not comfortable with having all of the transform mappings in the
  main contrib/ directory though.  Can we add a subdirectory called
  transforms containing all of these?

 I don't see any value in that.  The data types they apply to are in
 contrib after all.


I guess his suggestion is a contrib/transforms directory, not a transforms
directory at the top level.  Still, you don't see value?




-- 
Hitoshi Harada


Re: [HACKERS] MVCC catalog access

2013-07-05 Thread Andres Freund
Hi Robert,

On 2013-07-02 09:31:23 -0400, Robert Haas wrote:
 I have a few ideas for getting rid of the remaining uses of
 SnapshotNow that I'd like to throw out there:

Is your current plan to get rid of SnapshotNow entirely? I am wonder
because the changeset extraction needs to care and how the proper fix
for dealing with CatalogSnapshotData looks depends on it...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] strange IS NULL behaviour

2013-07-05 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 Should I just mark this as a TODO?

I thought it was on the list already.

regards, tom lane




[HACKERS] Have REFRESH MATERIALIZED VIEW run as the MV owner

2013-07-05 Thread Noah Misch
REFRESH MATERIALIZED VIEW should temporarily switch the current user ID to the
MV owner.  REINDEX and VACUUM do so to let privileged users safely maintain
objects owned by others, and REFRESH MATERIALIZED VIEW belongs in that class
of commands.  The MV query then runs as a security-restricted operation,
which forbids a few commands.  Most, e.g. UNLISTEN, are unlikely to arise in
practice.  The most interesting restriction is probably CREATE TEMP TABLE.
Consider a function that creates and later drops a temporary table that it
uses for intermediate storage during a complicated calculation.  That function
will no longer work in a MV query.  As a workaround, modify the function to
use a permanent table as its work area.

See attached patch.  The similar behavior of REINDEX et al. is undocumented.
Users are a bit more likely to notice limitations in the context of MVs, so I
added a brief documentation mention.  Seeing that this narrows the range of
valid MV queries, I bring it up now so MVs can debut with the restrictions
already in place.

Thanks,
nm

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com
diff --git a/doc/src/sgml/ref/create_materialized_view.sgml b/doc/src/sgml/ref/create_materialized_view.sgml
index 0ed764b..b742e17 100644
--- a/doc/src/sgml/ref/create_materialized_view.sgml
+++ b/doc/src/sgml/ref/create_materialized_view.sgml
@@ -105,7 +105,9 @@ CREATE MATERIALIZED VIEW replaceabletable_name/replaceable
 listitem
  para
   A xref linkend=sql-select, link linkend=sql-tableTABLE/link,
-  or xref linkend=sql-values command.
+  or xref linkend=sql-values command.  This query will run within a
+  security-restricted operation; in particular, calls to functions that
+  themselves create temporary tables will fail.
  /para
 /listitem
/varlistentry
diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c
index 2bfe5fb..a3509d8 100644
--- a/src/backend/commands/createas.c
+++ b/src/backend/commands/createas.c
@@ -33,6 +33,7 @@
 #include commands/prepare.h
 #include commands/tablecmds.h
 #include commands/view.h
+#include miscadmin.h
 #include parser/parse_clause.h
 #include rewrite/rewriteHandler.h
 #include storage/smgr.h
@@ -69,7 +70,11 @@ ExecCreateTableAs(CreateTableAsStmt *stmt, const char *queryString,
 {
Query  *query = (Query *) stmt-query;
IntoClause *into = stmt-into;
+   boolis_matview = (into-viewQuery != NULL);
DestReceiver *dest;
+   Oid save_userid = InvalidOid;
+   int save_sec_context = 0;
+   int save_nestlevel = 0;
List   *rewritten;
PlannedStmt *plan;
QueryDesc  *queryDesc;
@@ -90,6 +95,7 @@ ExecCreateTableAs(CreateTableAsStmt *stmt, const char *queryString,
{
ExecuteStmt *estmt = (ExecuteStmt *) query-utilityStmt;
 
+   Assert(!is_matview);/* excluded by syntax */
ExecuteQuery(estmt, into, queryString, params, dest, completionTag);
 
return;
@@ -97,6 +103,21 @@ ExecCreateTableAs(CreateTableAsStmt *stmt, const char *queryString,
Assert(query-commandType == CMD_SELECT);
 
/*
+* For materialized views, lock down security-restricted operations and
+* arrange to make GUC variable changes local to this command.  This is
+* not necessary for security, but this keeps the behavior similar to
+* REFRESH MATERIALIZED VIEW.  Otherwise, one could create a materialized
+* view not possible to refresh.
+*/
+   if (is_matview)
+   {
+   GetUserIdAndSecContext(save_userid, save_sec_context);
+   SetUserIdAndSecContext(save_userid,
+  save_sec_context | SECURITY_RESTRICTED_OPERATION);
+   save_nestlevel = NewGUCNestLevel();
+   }
+
+   /*
 * Parse analysis was done already, but we still have to run the rule
 * rewriter.  We do not do AcquireRewriteLocks: we assume the query either
 * came straight from the parser, or suitable locks were acquired by
@@ -160,6 +181,15 @@ ExecCreateTableAs(CreateTableAsStmt *stmt, const char *queryString,
FreeQueryDesc(queryDesc);
 
PopActiveSnapshot();
+
+   if (is_matview)
+   {
+   /* Roll back any GUC changes */
+   AtEOXact_GUC(false, save_nestlevel);
+
+   /* Restore userid and security context */
+   SetUserIdAndSecContext(save_userid, save_sec_context);
+   }
 }
 
 /*
diff --git a/src/backend/commands/matview.c b/src/backend/commands/matview.c
index 2ffdca3..1c383ba 100644
--- a/src/backend/commands/matview.c
+++ b/src/backend/commands/matview.c
@@ -122,6 +122,9 @@ ExecRefreshMatView(RefreshMatViewStmt *stmt, const char *queryString,
RewriteRule *rule;
List   

Re: [HACKERS] strange IS NULL behaviour

2013-07-05 Thread Bruce Momjian
On Fri, Jul  5, 2013 at 12:43:57PM -0400, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  Should I just mark this as a TODO?
 
 I thought it was on the list already.

We only have:

Improve handling of NULLs in arrays 

and some pl/pgsql items regarding nulls.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: FILTER for aggregates [was Re: [HACKERS] Department of Redundancy Department: makeNode(FuncCall) division]

2013-07-05 Thread David Fetter
On Mon, Jul 01, 2013 at 05:30:38PM +0100, Dean Rasheed wrote:
 On 1 July 2013 01:44, David Fetter da...@fetter.org wrote:
  On Fri, Jun 28, 2013 at 09:22:52PM +0100, Dean Rasheed wrote:
  On 21 June 2013 06:16, David Fetter da...@fetter.org wrote:
   Please find attached a patch which allows subqueries in the FILTER
   clause and adds regression testing for same.
  
 
  This needs re-basing/merging following Robert's recent commit to make
  OVER unreserved.
 
  Please find attached.  Thanks, Andrew Gierth!  In this one, FILTER is
  no longer a reserved word.
 
 
 Looking at this patch again, it appears to be in pretty good shape.
 
 - Applies cleanly to head.
 - Compiles with no warnings.
 - Includes regression test cases and doc updates.
 - Compatible with the relevant part of T612, Advanced OLAP operations.
 - Includes pg_dump support.
 - Code changes all look reasonable, and I can't find any corner cases
 that have been missed.
 - Appears to work as expected. I tested everything I could think of
 and couldn't break it.
 
 AFAICT all the bases have been covered. As mentioned upthread, I would
 have preferred a few more regression test cases, and a couple of the
 tests don't appear to return anything interesting, but I'll leave that
 for the committer to decide whether they're sufficient for regression
 tests.
 
 I have a few suggestions to improve the docs:
 
 1). In syntax.sgml: The aggregate_name can also be suffixed with
 FILTER as described below. It's not really a suffix to the aggregate
 name, since it follows the function arguments and optional order by
 clause. Perhaps it would be more consistent with the surrounding text
 to say something like
 
 replaceableexpression/replaceable is
 any value expression that does not itself contain an aggregate
 expression or a window function call, and
 !replaceableorder_by_clause/replaceable and
 !replaceablefilter_clause/replaceable are optional
 !literalORDER BY/ and literalFILTER/ clauses as described below.
 
 2). In syntax.sgml: ... or when a FILTER clause is present, each row
 matching same. In the context of that paragraph this suggests that
 the filter clause only applies to the first form, since that paragraph
 is a description of the 4 forms of the aggregate function. I don't
 think it's worth mentioning FILTER in this paragraph at all --- it's
 adequately described below that.
 
 3). In syntax.sgml: Adding a FILTER clause to an aggregate specifies
 which values of the expression being aggregated to evaluate. How
 about something a little more specific, along the lines of
 
 If literalFILTER/ is specified, then only input rows for which
 the replaceablefilter_clause/replaceable evaluates to true are
 fed to the aggregate function; input rows for which the
 replaceablefilter_clause/replaceable evaluates to false or the
 null value are discarded.  For example...
 
 4). In select.sgml: In the absence of a FILTER clause, aggregate
 functions It doesn't seem right to refer to the FILTER clause at
 the top level here because it's not part of the SELECT syntax being
 described on this page. Also I think this should include a
 cross-reference to the aggregate function syntax section, perhaps
 something like:
 
 Aggregate functions, if any are used, are computed across all rows
 making up each group, producing a separate value for each group
 (whereas without literalGROUP BY/literal, an aggregate
 produces a single value computed across all the selected rows).
 +The set of rows fed to the aggregate function can be further filtered
 +by attaching a literalFILTER/literal clause to the aggregate
 +function call, see xref ... for more information.
 When literalGROUP BY/literal is present, it is not valid for
 the commandSELECT/command list expressions to refer to
 ungrouped columns except within aggregate functions or if the
 ungrouped column is functionally dependent on the grouped columns,
 since there would otherwise be more than one possible value to
 return for an ungrouped column.  A functional dependency exists if
 the grouped columns (or a subset thereof) are the primary key of
 the table containing the ungrouped column.

Please find attached changes based on the above.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
diff --git a/doc/src/sgml/keywords.sgml b/doc/src/sgml/keywords.sgml
index 5e3b33a..ecfde99 100644
--- a/doc/src/sgml/keywords.sgml
+++ b/doc/src/sgml/keywords.sgml
@@ -1786,7 +1786,7 @@
/row
row
 entrytokenFILTER/token/entry
-entry/entry
+entrynon-reserved/entry
 entryreserved/entry
 entryreserved/entry
 entry/entry
@@ 

[HACKERS] Millisecond-precision connect_timeout for libpq

2013-07-05 Thread ivan babrou
Hi, guys! I made a quick patch to support floating-point numbers in
the connect_timeout param for libpq. The value is treated as seconds,
so this is backwards-compatible. I don't usually write in C, so there
may be mistakes. Could you review it and give me some feedback?

--
Regards, Ian Babrou
http://bobrik.name http://twitter.com/ibobrik skype:i.babrou


connect_timeout_in_ms.patch
Description: Binary data



Re: [HACKERS] GSOC13 proposal - extend RETURNING syntax

2013-07-05 Thread David Fetter
On Fri, Jul 05, 2013 at 03:22:33AM +0200, Karol Trzcionka wrote:
 Hello,
 according to my mentor's suggestion, I send first PoC patch of
 RETURNING AFTER/BEFORE statement. Some info:
 - it is an early version - more of a hack PoC than an RC patch
 - AFTER in this version works as described before, but it returns the row
 after the update yet before post-update triggers (to be fixed, or switched
 with the current * behaviour - I don't know yet)
 - the patch passes all regression tests
 - the code is still uncommented, to be fixed
 I'm not sure what may fail, so use it at your own risk ;)
 Regards,
 Karol

Karol,

Per discussion in IRC, please follow up with your patch that includes
such documentation, new regression tests, etc., you've written for the
feature.

Thanks! :)

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] Millisecond-precision connect_timeout for libpq

2013-07-05 Thread Tom Lane
ivan babrou ibob...@gmail.com writes:
 Hi, guys! I made a quick patch to support floating number in
 connect_timeout param for libpq.

What exactly is the use case for that?  It seems like extra complication
for something with little if any real-world usefulness.

regards, tom lane




Re: [HACKERS] fallocate / posix_fallocate for new WAL file creation (etc...)

2013-07-05 Thread Jeff Davis
On Fri, 2013-07-05 at 08:21 -0500, Jon Nelson wrote:
 Wonderful. Is removing the GUC something that I should do or should
 that be done by somebody that knows more about what they are doing? (I
 am happy to give it a go!)

I'll take care of that.

 Should the small test program that I made also be included somewhere
 in the source tree?

I don't think that's necessary; the fact that it's in the mailing list
archives is enough.

Regards,
Jeff Davis






Re: [HACKERS] Proposal - Support for National Characters functionality

2013-07-05 Thread Peter Eisentraut
On 7/4/13 10:11 PM, Arulappan, Arul Shaji wrote:
 The main aim at the moment is to get some feedback on the above to know
 if this feature is something that would benefit PostgreSQL in general,
 and if users maintaining DBs in non-English speaking regions will find
 this beneficial.

For European languages, I think everyone has moved to using Unicode, so
the demand for supporting multiple encodings is approaching zero.  The
CJK realm might have different requirements.





Re: [HACKERS] Proposal - Support for National Characters functionality

2013-07-05 Thread Pavel Stehule
Yes, as far as I know almost everyone uses UTF8 without problems. I haven't
seen a request for multi-encoding support in a long time.
Dne 5.7.2013 20:28 Peter Eisentraut pete...@gmx.net napsal(a):

 On 7/4/13 10:11 PM, Arulappan, Arul Shaji wrote:
  The main aim at the moment is to get some feedback on the above to know
  if this feature is something that would benefit PostgreSQL in general,
  and if users maintaining DBs in non-English speaking regions will find
  this beneficial.

 For European languages, I think everyone has moved to using Unicode, so
 the demand for supporting multiple encodings is approaching zero.  The
 CJK realm might have different requirements.






[HACKERS] Preventing tuple-table leakage in plpgsql

2013-07-05 Thread Tom Lane
Bug #8279 exhibits an intra-transaction memory leak in a plpgsql
function that repeatedly traps an error.  The cause of the leak is that
the SPITupleTable created during exec_stmt_execsql is never cleaned up.
(It will get cleaned up at function exit, but that's not soon enough in
this usage.)

The submitter proposes fixing this by inserting some more
SPI_freetuptable calls in exec_stmt_execsql, but that's not much of a
fix.  The patch only covers the two ereports associated with strict
mode (and not, say, any errors thrown in functions called by
exec_stmt_execsql).  Nor does it do anything for similar leakage
scenarios elsewhere.  I count at least four other functions with the
same kind of oversight in plpgsql, and there are suspicious-looking
usages elsewhere as well.

One approach we could take is to insert PG_TRY blocks to ensure that
SPI_freetuptable is called on error exit from such functions.  (plpython
seems to have adopted this solution already.)  However, that tends to be
pretty messy notationally, and possibly could represent a noticeable
performance hit depending on what you assume about the speed of
sigsetjmp().  Moreover, fixing known trouble spots this way does nothing
to guard against similar errors elsewhere.

So I'm inclined to propose that SPI itself should offer some mechanism
for cleaning up tuple tables at subtransaction abort.  We could just
have it automatically throw away tuple tables made in the current
subtransaction, or we could allow callers to exercise some control,
perhaps by calling a function that says don't reclaim this tuple table
automatically.  I'm not sure if there's any real use-case for such a
call though.

It's also not very clear to me if tuple tables should be thrown away
automatically at subtransaction commit.  We could do that, or leave
things alone, or add some logic to emit warning bleats about unreleased
tuple tables (comparable to what is done for many other resource types).
If we change anything about the commit case, I think we run significant
risk of breaking third-party code that works now, so maybe it's best
to leave that alone.

It might also be worth debating whether to back-patch such a fix.
This issue has been there ever since plpgsql grew the ability to trap
errors, so the lack of previous reports might mean that it's not worth
taking risks to fix such leaks in back branches.

Comments, better ideas?

regards, tom lane




Re: [HACKERS] Changing recovery.conf parameters into GUCs

2013-07-05 Thread Josh Berkus
Robert, Simon, All,

On 04/01/2013 04:51 AM, Robert Haas wrote:
On Thu, Mar 28, 2013 at 11:48 AM, Simon Riggs si...@2ndquadrant.com wrote:
 a)  recovery parameters are made into GUCs (for which we have a patch
 from Fujii)
 b)  all processes automatically read recovery.conf as the last step in
 reading configuration files, if it exists, even if data_directory
 parameter is in use (attached patch)
 c)  we trigger archive recovery by the presence of either
 recovery.conf or recovery.trigger in the data directory. At the end,
 we rename to recovery.done just as we do now. This means that any
 parameters put into recovery.conf will not be re-read when we SIGHUP
 after end of recovery. Note that recovery.trigger will NOT be read for
 parameters and is assumed to be zero-length.
 (minor patch required)
 
 I still prefer Greg Smith's proposal.
 
 http://www.postgresql.org/message-id/4ee91248.8010...@2ndquadrant.com

So, this seems to still be stalled.  Is there a way forward on this
which won't cause us to wait *another* version before we have working
replication GUCs?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] [PATCH] Add transforms feature

2013-07-05 Thread Josh Berkus
On 07/05/2013 09:08 AM, Peter Eisentraut wrote:
 On 7/3/13 7:15 PM, Josh Berkus wrote:
 I'm not comfortable with having all of the transform mappings in the
 main contrib/ directory though.  Can we add a subdirectory called
 transforms containing all of these?
 
 I don't see any value in that.  The data types they apply to are in
 contrib after all.

Well, I had three thoughts on this:

(a) transforms aren't like other contribs, in that they depend on other
contribs being installed first.

(b) we can expect maybe a dozen to 18 of them in core based on the data
types there, and I hate to clutter up /contrib, and

(c) I'd like to do a future feature which supports install all
transforms functionality, which would be helped by having them in their
own directory.

That's my thinking on it anyway.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] GSOC13 proposal - extend RETURNING syntax

2013-07-05 Thread Karol Trzcionka
Updated patch:
- include sgml
- fix all compiler warnings
- some cleanup
- fix correctness of feature
Regards,
Karol
diff --git a/doc/src/sgml/ref/update.sgml b/doc/src/sgml/ref/update.sgml
index 90b9208..eba35f0 100644
--- a/doc/src/sgml/ref/update.sgml
+++ b/doc/src/sgml/ref/update.sgml
@@ -194,12 +194,27 @@ UPDATE [ ONLY ] replaceable class=PARAMETERtable_name/replaceable [ * ] [
 termreplaceable class=PARAMETERoutput_expression/replaceable/term
 listitem
  para
-  An expression to be computed and returned by the commandUPDATE/
-  command after each row is updated.  The expression can use any
-  column names of the table named by replaceable class=PARAMETERtable_name/replaceable
-  or table(s) listed in literalFROM/.
-  Write literal*/ to return all columns.
+  An expression to be computed and returned by the
+  commandUPDATE/ command either before or after (prefixed with
+  literalBEFORE./literal and literalAFTER./literal,
+  respectively) each row is updated.  The expression can use any
+  column names of the table named by replaceable
+  class=PARAMETERtable_name/replaceable or table(s) listed in
+  literalFROM/.  Write literalAFTER.*/literal to return all 
+  columns after the update. Write literalBEFORE.*/literal for all
+  columns before the update. Write literal*/literal to return all
+  columns after update and all triggers fired (these values are in table
+  after command). You may combine BEFORE, AFTER and raw columns in the
+  expression.
  /para
+ warningpara
+ Mixing table names or aliases named before or after with the
+ above will result in confusion and suffering.  If you happen to
+ have a table called literalbefore/literal or
+ literalafter/literal, alias it to something else when using
+ RETURNING.
+ /para/warning
+
 /listitem
/varlistentry
 
@@ -287,15 +302,16 @@ UPDATE weather SET temp_lo = temp_lo+1, temp_hi = temp_lo+15, prcp = DEFAULT
   /para
 
   para
-   Perform the same operation and return the updated entries:
+   Perform the same operation and return information on the changed entries:
 
 programlisting
 UPDATE weather SET temp_lo = temp_lo+1, temp_hi = temp_lo+15, prcp = DEFAULT
   WHERE city = 'San Francisco' AND date = '2003-07-03'
-  RETURNING temp_lo, temp_hi, prcp;
+  RETURNING temp_lo AS new_low, temp_hi AS new_high, BEFORE.temp_hi/BEFORE.temp_lo AS old_ratio, AFTER.temp_hi/AFTER.temp_lo AS new_ratio, prcp;
 /programlisting
   /para
 
+
   para
Use the alternative column-list syntax to do the same update:
 programlisting
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index 42d6621..cc68568 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -1920,6 +1920,7 @@ range_table_walker(List *rtable,
 		{
 			case RTE_RELATION:
 			case RTE_CTE:
+			case RTE_BEFORE:
 /* nothing to do */
 break;
 			case RTE_SUBQUERY:
@@ -2622,6 +2623,7 @@ range_table_mutator(List *rtable,
 		{
 			case RTE_RELATION:
 			case RTE_CTE:
+			case RTE_BEFORE:
 /* we don't bother to copy eref, aliases, etc; OK? */
 break;
 			case RTE_SUBQUERY:
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index b2183f4..2698535 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -2351,6 +2351,7 @@ _outRangeTblEntry(StringInfo str, const RangeTblEntry *node)
 	switch (node-rtekind)
 	{
 		case RTE_RELATION:
+		case RTE_BEFORE:
 			WRITE_OID_FIELD(relid);
 			WRITE_CHAR_FIELD(relkind);
 			break;
diff --git a/src/backend/nodes/readfuncs.c b/src/backend/nodes/readfuncs.c
index 3a16e9d..04931ee 100644
--- a/src/backend/nodes/readfuncs.c
+++ b/src/backend/nodes/readfuncs.c
@@ -1189,6 +1189,7 @@ _readRangeTblEntry(void)
 	switch (local_node-rtekind)
 	{
 		case RTE_RELATION:
+		case RTE_BEFORE:
 			READ_OID_FIELD(relid);
 			READ_CHAR_FIELD(relkind);
 			break;
diff --git a/src/backend/optimizer/plan/initsplan.c b/src/backend/optimizer/plan/initsplan.c
index 839ed9d..7fc6ea1 100644
--- a/src/backend/optimizer/plan/initsplan.c
+++ b/src/backend/optimizer/plan/initsplan.c
@@ -174,9 +174,16 @@ add_vars_to_targetlist(PlannerInfo *root, List *vars,
 		if (IsA(node, Var))
 		{
 			Var		   *var = (Var *) node;
-			RelOptInfo *rel = find_base_rel(root, var-varno);
+			RelOptInfo *rel;
 			int			attno = var-varattno;
+			RangeTblEntry *rte;
 
+			if (root-parse-commandType == CMD_UPDATE){
+rte = ((RangeTblEntry *) list_nth(root-parse-rtable, (var-varno)-1));
+if(rte-rtekind == RTE_BEFORE)
+	continue;
+			}
+			rel = find_base_rel(root, var-varno);
 			Assert(attno = rel-min_attr  attno = rel-max_attr);
 			attno -= rel-min_attr;
 			if (rel-attr_needed[attno] == NULL)
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index d80c264..77b0c38 100644
--- a/src/backend/optimizer/plan/planner.c
+++ 

[HACKERS] Review: extension template

2013-07-05 Thread Markus Wanner
Hi,

I reviewed Dimitri's work on extension templates [2]. There's some
discussion still ongoing and the patch has gone through several
revisions since its addition to the current CF. The patch has already
been marked as 'returned with feedback', and I can support that
resolution (for this CF). I still see value in the following review.

Initially, I was confused about what the patch is supposed to achieve.
The 'template' naming certainly contributed to that confusion. My mental
model of a template is something that I can throw away after its use. (I
note the text search templates don't follow that model, either). Where
as up to now, extensions were something that I install (system wide) and
then enable per database (by CREATE, which can be thought of as a
misnomer as well).

Of course you can think of such a system wide installation as a template
for the creation (an instantiation per database). However, we didn't
ever call the system wide installations templates. Nor does the patch
start to adapt that naming scheme.

The major distinguishing factor is not the 'template' character of
extensions installed that way, but the storage place of its control
data: filesystem vs. system catalog. I'd either recommend
appropriate renaming to reflect that fact and to avoid confusing users;
or follow the template model better and decouple the extension from
its template - with implications on extensions requiring additional
binary code. Thinking of it, I kind of like that approach...

Dimitri responded promptly to a request to rebase the patch. Version 8
still applies cleanly to git master (as of Jul 5, 9ce9d). The patch
matches git revision c0c507022ec912854e6658c5a10a3dedb1c36d67 of dim's
github branch 'tmpl4' [2]. That's what I tested with.

The base of the patched tree builds just fine (i.e. plain 'make'),
although the compiler rightfully warns about an 'evi' variable being set
but not used. extension.c, line 1170. Jaime mentioned that already.

Compiling pg_upgrade_support in contrib fails:

 $SRC/contrib/pg_upgrade_support/pg_upgrade_support.c:193:8:
   error: too few arguments to function ‘InsertExtensionTuple’

The build passes 'make check'. Additional tests are provided. Good.

As originally mentioned by Heikki, if the tplscript doesn't parse, the
error message is just "syntax error at or near". That matches the
behavior of extensions installed on the file system. However, given this
adds a second way, a hint about where this error actually comes from is
mandatory, IMO.

Trying to re-create a pre-existing template properly throws 'template
for extension $NAME version $VERSION already exists'. However, if
the extension is already enabled for that database, it instead says:
extension $NAME already exists. I can see why that's fine if you
assume a strong binding between the instantiation and the template.

However, it's possible to enable an extension and then rename its
template. The binding (as in pg_depend) is still there, but the above
error (in that case extension $OLD_NAME already existing) certainly
doesn't make sense. One can argue whether or not an extension with a
different name is still the same extension at all...

Trying to alter a nonexistent or file-system-stored extension doesn't
throw an error, but silently succeeds. Especially in the latter case,
that's utterly misleading. Please fix.

That started to get me worried about the effects of a mixed
installation, but I quickly figured it's not possible to have a full
version on disk and then add an incremental upgrade via the system
catalogs. I think that's a fair limitation, as mixed installations would
pose their own set of issues. On the other hand, isn't ease of upgrades
a selling point for this feature?

In any case, the error message could again be more specific:

  (having extension 'pex' version '0.9' on the file-system)

  # CREATE TEMPLATE FOR EXTENSION pex VERSION '1.0' ...
  ERROR: extension pex already available

  [ This one could mention it exists on the file-system. ]

  # CREATE TEMPLATE FOR EXTENISON pex FROM '1.9' TO '1.10' AS ...
  ERROR: no template for extension pex

This last error is technically correct only if you consider file system
extensions to not be templates. In any case, there is *something*
relevant called pex on the file-system, that prevents creation of the
template in the system catalogs. The error message should definitely be
more specific.

With delight I note that renaming to an extension name that pre-exists
on the file-system is properly prohibited. Again, the error message
could be more specific and point to the appropriate place. However,
that's reasonably far down the wish-list.

[ Also note the nifty difference between extension already available
vs extension already exists. The former seems to mean the template
exists, while the latter refers to the instantiation. ]

However, the other way around cannot be prevented by Postgres: Files
representing an extension (template, if you want) can always be 

Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-07-05 Thread Josh Berkus
All,

 I think that's way over the top. Can we all just cool down a bit? I
 really don't see Josh as Stalin.
 
 I don't either.  It is the judging others efforts that concerns me.
 

I agree that publishing the committer portion of the list was a mistake,
and will not include it in the future CFM instructions.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Millisecond-precision connect_timeout for libpq

2013-07-05 Thread ivan babrou
If you can figure out quickly that postgresql is overloaded, you can
decide what to do sooner. In our app we have a very strict limit on
connect time to mysql, redis and other services, but postgresql has a
minimum of 2 seconds. When the average processing time for a request
is under 100ms, sub-second timeouts matter.

On 5 July 2013 22:20, Tom Lane t...@sss.pgh.pa.us wrote:
 ivan babrou ibob...@gmail.com writes:
 Hi, guys! I made a quick patch to support floating number in
 connect_timeout param for libpq.

 What exactly is the use case for that?  It seems like extra complication
 for something with little if any real-world usefulness.

 regards, tom lane



-- 
Regards, Ian Babrou
http://bobrik.name http://twitter.com/ibobrik skype:i.babrou




Re: [HACKERS] Millisecond-precision connect_timeout for libpq

2013-07-05 Thread Tom Lane
ivan babrou ibob...@gmail.com writes:
 If you can figure out that postgresql is overloaded then you may
 decide what to do faster. In our app we have very strict limit for
 connect time to mysql, redis and other services, but postgresql has
 minimum of 2 seconds. When processing time for request is under 100ms
 on average sub-second timeouts matter.

If you are issuing a fresh connection for each sub-100ms query, you're
doing it wrong anyway ...

regards, tom lane




Re: [HACKERS] Millisecond-precision connect_timeout for libpq

2013-07-05 Thread Josh Berkus
On 07/05/2013 12:26 PM, Tom Lane wrote:
 ivan babrou ibob...@gmail.com writes:
 If you can figure out that postgresql is overloaded then you may
 decide what to do faster. In our app we have very strict limit for
 connect time to mysql, redis and other services, but postgresql has
 minimum of 2 seconds. When processing time for request is under 100ms
 on average sub-second timeouts matter.
 
 If you are issuing a fresh connection for each sub-100ms query, you're
 doing it wrong anyway ...

It's fairly common with certain kinds of apps, including Rails and PHP.
 This is one of the reasons why we've discussed having a kind of
stripped-down version of pgbouncer built into Postgres as a connection
manager.  If it weren't valuable to be able to relocate pgbouncer to
other hosts, I'd still say that was a good idea.

Ivan would really strongly benefit from running pgbouncer on his
appservers instead of connecting directly to Postgres.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] sepgsql and materialized views

2013-07-05 Thread Noah Misch
On Fri, Feb 08, 2013 at 02:51:40PM +0100, Kohei KaiGai wrote:
 2013/2/7 Kevin Grittner kgri...@ymail.com:
  Kohei KaiGai kai...@kaigai.gr.jp wrote:
 
  So, I'd like to review two options.
  1) we uses db_table object class for materialized-views for
  a while, until selinux-side become ready. Probably, v9.3 will
  use db_table class then switched at v9.4.
  2) we uses db_materialized_view object class from the
  begining, but its permission checks are ignored because
  installed security policy does not support this class yet.
 
  My preference is 2), even though we cannot apply label
  based permission checks until selinux support it, because
  1) makes troubles when selinux-side become ready to
  support new db_materialized_view class. Even though
  policy support MV class, working v9.3 will ignore the policy.
 
  Let me ask selinux folks about this topic also.
 
  To make sure I understand, the current patch is consistent with
  option 1?
 
 I believe so, even though I haven't done a deep and detailed investigation
 yet.
 
   It sounds like I have code from a prior version of the
  patch pretty close to what you describe for option 2, so that can
  be put back in place if you confirm that as the preferred option.
 
 As above, I'd like to suggest the option 2.
 Could you once remove the updates related to contrib/sepgsql?
 I'll have a discussion about new materialized_view object class
 on selinux list soon, then I'll submit a patch towards contrib/sepgsql
 according to the consensus here.

Has this progressed?

Should we consider this a 9.3 release blocker?  sepgsql already has a red box
warning about its limitations, so adding the limitation that materialized
views are unrestricted wouldn't be out of the question.

Thanks,
nm

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: [HACKERS] Millisecond-precision connect_timeout for libpq

2013-07-05 Thread Joshua D. Drake


On 7/5/2013 1:01 PM, Josh Berkus wrote:

If you are issuing a fresh connection for each sub-100ms query, you're
doing it wrong anyway ...
It's fairly common with certain kinds of apps, including Rails and PHP.
  This is one of the reasons why we've discussed having a kind of
stripped-down version of pgbouncer built into Postgres as a connection
manager.  If it weren't valuable to be able to relocate pgbouncer to
other hosts, I'd still say that was a good idea.


No kidding. I think a lot of -hackers forget that the web rules here and 
the web is stateless, which means a huge performance loss for postgresql 
unless we add yet another piece of software. Pre-forking here would 
really help us.


JD






Re: [HACKERS] Eliminating PD_ALL_VISIBLE, take 2

2013-07-05 Thread Jeff Davis
On Tue, 2013-07-02 at 10:12 -0700, Jeff Davis wrote:
 Regardless, this is at least a concrete issue that I can focus on, and I
 appreciate that. Are scans of small tables the primary objection to this
 patch, or are there others? If I solve it, will this patch make real
 progress?

I had an idea here:

What if we keep PD_ALL_VISIBLE, but make it more like other hints, and
make it optional? If a page is all visible, either or both of the VM bit
or PD_ALL_VISIBLE could be set (please suspend disbelief for a moment).

Then, we could use a heuristic, like setting PD_ALL_VISIBLE in the first
256 pages of a relation; but in later pages, only setting it if the page
is already dirty for some other reason.
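The heuristic described above is simple enough to state in a few lines. A sketch, where the constant and function name are made up for illustration and are not PostgreSQL code:

```python
# Toy model of the proposed heuristic: always set the page-level hint
# in the first 256 pages; beyond that, only piggyback on pages that
# are being dirtied anyway, so a bulk load doesn't rewrite the table.
HINT_ALWAYS_FIRST_N_PAGES = 256

def should_set_pd_all_visible(block_number: int, page_is_dirty: bool) -> bool:
    if block_number < HINT_ALWAYS_FIRST_N_PAGES:
        return True           # small tables: avoid VM contention on scans
    return page_is_dirty      # large tables: no extra writes after a load
```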

That has the following benefits:

1. Eliminates the worry over contention related to scans, because we
wouldn't need to use the VM for small tables. And I don't think anyone
was worried about using the VM on a large table scan.

2. Still avoids dirtying lots of pages after a data load. I'm not
worried if a few MB of data is rewritten on a large table.

3. Eliminates the complex way in which we (ab)use WAL and the recovery
mechanism to keep PD_ALL_VISIBLE and the VM bit in sync.

Of course, there's a reason that PD_ALL_VISIBLE is not like a normal
hint: we need to make sure that inserts/updates/deletes clear the VM
bit. But my patch already addresses that by keeping the VM page pinned.

That has some weaknesses: as Heikki pointed out[1], you can defeat the
cache of the pin by randomly seeking between 512MB regions during an
update (would only be noticeable if it's all in shared buffers already,
of course). But even in that case, it was a fairly modest degradation
(20%), so overall this seems like a fairly minor drawback.

Thoughts?

Regards,
Jeff Davis

[1] http://www.postgresql.org/message-id/50fd11c5.1030...@vmware.com






Re: [HACKERS] Review: query result history in psql

2013-07-05 Thread Josh Berkus
On 07/02/2013 06:16 PM, Robert Haas wrote:
 I'm kinda not all that convinced that this feature does anything
 that's actually useful.  If you want to save your query results, you
 can just stick CREATE TEMP TABLE ans AS in front of your query.
 What does that not give you that this gives you?

Convenience.  Think of the times when you were doing a quick check on
some query result from several different queries and wanted to flip back
and forth between them, but you can't do that by scrolling because the
pager has the query result and hides it from terminal scrolling.  If we
had this feature, I'd use it a lot, personally.

My take on the issues discussed:

Interactive Mode: I personally don't see value in this, and wouldn't use
it if it existed.  Plus even if there is value in it, it could be added
later on, so shouldn't block this patch.

Finding Query Results: I don't find the approach of ans01/ans02/ans03
useful.  For any place where I really need this feature, I'm going to
have enough buffered queries that there's no way I can remember which is
which.  I don't, however, have an immediate solution for something which
would be overall easier.  Maybe a prompt after each query for what name
to put it in the history as?  Not sure.

ans: I agree that this is not intuitive for most DBAs.  The other
applications which use that abbreviation do not have sufficient overlap
with PostgreSQL for it to be natural.  What about just "result"?

Size Limits: before this becomes a PostgreSQL feature, we really do need
to have some limits, both for total memory size, and for number of saved
query result sets.  Otherwise we'll have lots of people crashing their
clients because they forgot that result history was on.

Also, I'd like to think some about how this could, potentially, in the
future tie in to being able to dispatch asynchronous queries from psql.
If we have a query result cache, it's one short step to allowing that
result cache to be populated asynchronously.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Millisecond-precision connect_timeout for libpq

2013-07-05 Thread Tom Lane
Joshua D. Drake j...@commandprompt.com writes:
 On 7/5/2013 1:01 PM, Josh Berkus wrote:
 This is one of the reasons why we've discussed having a kind of
 stripped-down version of pgbouncer built into Postgres as a connection
 manager.  If it weren't valuable to be able to relocate pgbouncer to
 other hosts, I'd still say that was a good idea.

 No kidding. I think a lot of -hackers forget that the web rules here and 
 the web is stateless, which means a huge performance loss for postgresql 
 unless we add yet another piece of software. Pre-forking here would 
 really help us.

Pre-forking, per se, wouldn't be that much help IMO.  You really want to
connect to a backend that's already loaded its catalog caches etc.  So a
connection pooler is the right solution, not least because you can get
it today.  Whether we should integrate a pooler into core is more of a
project management issue than a technical one, I think.

regards, tom lane




[HACKERS] Proposal: template-ify (binary) extensions

2013-07-05 Thread Markus Wanner
Hi,

let me elaborate on an idea I had to streamline extension templates. As
I wrote in my recent review [1], I didn't have the mental model of a
template in mind for extensions, so far.

However, writing the review, I came to think of it. I certainly agree
that extensions which do not carry a binary executable can be considered
templates. The SQL scripts can be deleted after creation of the
extension in the database, because the objects created with them are an
instantiation in the database itself. They do not depend on the template.

That's not the case for compiled, executable code that usually lives in
a shared library on the filesystem. There, the instantiation of the
template merely consists of a link to the library on the file-system.
If you remove that file, you break the extension.

One way to resolve that - and that seems to be the direction Dimitri's
patch is going - is to make the extension depend on its template. (And
thereby breaking the mental model of a template, IMO. In the spirit of
that mental model, one could argue that code for stored procedures
shouldn't be duplicated, but instead just reference its ancestor.)

The other possible resolution is to make all extensions fit the template
model, i.e. make extensions independent of their templates.

This obviously requires Postgres to internalize the compiled binary code
of the library upon instantiation. (Storing the binary blobs by hash in
a shared catalog could easily prevent unwanted duplication between
databases.)
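The parenthetical about storing blobs by hash amounts to content-addressed storage. A minimal sketch of the deduplication idea, with a plain dict standing in for a shared catalog (nothing here is actual catalog layout):

```python
# Content-addressed blob store: the same shared-library binary uploaded
# from several databases is keyed by its hash and stored exactly once.
import hashlib

class BlobStore:
    def __init__(self):
        self._by_hash = {}            # sha256 hex digest -> bytes

    def put(self, blob: bytes) -> str:
        key = hashlib.sha256(blob).hexdigest()
        self._by_hash.setdefault(key, blob)   # duplicate puts are no-ops
        return key

    def get(self, key: str) -> bytes:
        return self._by_hash[key]
```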

Advantages:
 - Consistent mental model: template
 - Closes the gap between how SQL scripts and binary parts of
   an extension are handled. (Or between binary and non-binary
   extensions.)
 - Compatibility and co-existence between file-system and
   system catalog based extension templates is simplified.
 - Independence of extensions from library files shipped by
   3rd party extensions (those would only ship a template, not
   the extension per se).
 - Paves the way to uploading extensions that carry a binary,
   executable payload via libpq.

Challenges:
 - CPU Architecture must be taken into account for backups. Restoring
   a backup on a different architecture would still require the
   (external) binary code for that specific architecture, because we
   cannot possibly store binaries for all architectures.
 - A bit of a mind shift for how binary extensions work.


Regards

Markus Wanner





Re: [HACKERS] Review: extension template

2013-07-05 Thread Markus Wanner
On 07/05/2013 09:05 PM, Markus Wanner wrote:
 The patch has already
 been marked as 'returned with feedback', and I can support that
 resolution (for this CF).

Oops, I just realized it's only set to waiting on author now. I guess
I confused the two states. Please excuse my glitch.

Dimitri, do you agree to resolve with returned with feedback for this
CF? Or how would you like to proceed?

Josh, thanks for linking the review in the CF.

Regards

Markus Wanner




Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-07-05 Thread Peter Geoghegan
On Thu, Jul 4, 2013 at 6:16 AM, Andrew Dunstan and...@dunslane.net wrote:
 You know what this reminds me of --- early communist movements.  Members
 were scrutinized to see if they were working hard enough for the
 cause, and criticized/shamed/punished if they were not.  The leaders
 became tyrants, and justified the tyranny because they were creating a
 better society.  We all know that didn't end well.


 I think that's way over the top. Can we all just cool down a bit? I really
 don't see Josh as Stalin.

+1. I think that Josh misjudged things in starting this thread, but
there is no need for this kind of rhetoric. It helps no one.


-- 
Peter Geoghegan




Re: [HACKERS] sepgsql and materialized views

2013-07-05 Thread Tom Lane
Noah Misch n...@leadboat.com writes:
 On Fri, Feb 08, 2013 at 02:51:40PM +0100, Kohei KaiGai wrote:
 I'll have a discussion about new materialized_view object class
 on selinux list soon, then I'll submit a patch towards contrib/sepgsql
 according to the consensus here.

 Has this progressed?

 Should we consider this a 9.3 release blocker?  sepgsql already has a red box
 warning about its limitations, so adding the limitation that materialized
 views are unrestricted wouldn't be out of the question.

Definitely -1 for considering it a release blocker.  If KaiGai-san can
come up with a fix before we otherwise would release 9.3, that's great,
but there's no way that sepgsql has a large enough user community to
justify letting it determine the release schedule.

regards, tom lane




Re: [HACKERS] Request for Patch Feedback: Lag Lead Window Functions Can Ignore Nulls

2013-07-05 Thread Jeff Davis
On Mon, 2013-07-01 at 07:40 -0400, Robert Haas wrote:
 On Sun, Jun 30, 2013 at 10:07 PM, Nicholas White n.j.wh...@gmail.com wrote:
  I've attached another iteration of the patch that fixes the multiple-window
  bug and adds ( uses) a function to create a Bitmapset using a custom
  allocator. I don't think there's any outstanding problems with it now.
 
 I think the right way to do this is to temporarily set the current
 memory context to winobj-winstate-partcontext while creating or
 manipulating the Bitmapset and restore it afterwards.  Maybe someone
 will say that's a modularity violation, but surely this is worse...

I think we should get rid of the bitmapset entirely. For one thing, we
want to be able to support large frames, and the size of the allocation
for the bitmap is dependent on the size of the frame. It would take a
very large frame for that to matter, but conceptually, it doesn't seem
right to me.

Instead of the bitmapset, we can keep track of two offsets, and the
number of rows in between which are non-NULL. That only works with a
constant offset; but I'm not inclined to optimize for the special case
involving large frames, variable offset which always happens to be
large, and IGNORE NULLS.
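For what it's worth, the two-offset idea can be sketched outside the executor: with a constant offset k, LAG(x, k) IGNORE NULLS only ever needs the last k non-NULL values seen, so memory is O(k) rather than proportional to the frame size. A toy illustration in plain Python, nothing from the actual patch:

```python
# Streaming LAG(x, k) IGNORE NULLS with O(k) state: keep only the last
# k non-NULL values; the answer for the current row is the oldest one.
from collections import deque

def lag_ignore_nulls(rows, k):
    recent = deque(maxlen=k)     # last k non-NULL values, oldest first
    out = []
    for value in rows:
        out.append(recent[0] if len(recent) == k else None)
        if value is not None:
            recent.append(value)
    return out
```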

Regards,
Jeff Davis






Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-07-05 Thread Jeff Janes
On Wed, Jul 3, 2013 at 12:03 PM, Christopher Browne cbbro...@gmail.com wrote:

 On Wed, Jul 3, 2013 at 2:24 PM, Cédric Villemain
 ced...@2ndquadrant.com wrote:

  Clearly I ticked off a bunch of people by publishing the list.  On the
  other hand, in the 5 days succeeding the post, more than a dozen
  additional people signed up to review patches, and we got some of the
  ready for committer patches cleared out -- something which nothing
  else I did, including dozens of private emails, general pleas to this
  mailing list, mails to the RRReviewers list, served to accomplish, in
  this or previous CFs.

 Other rules appeared, like the 5-day limit.


The limit was previously 4 days (at least according to
http://wiki.postgresql.org/wiki/RRReviewers) but I think that that was
honored almost exclusively in the breach.  I don't have a sense for how the
5 day limit is working.  If it is working, great.  If not, I would advocate
lengthening it--having a limit specified but not generally held to is
counterproductive.  I know that I, and at least one other potential
reviewer, won't ask to be assigned a random patch because we have no
confidence we can do an adequate review of a random patch within a
contiguous 5 day window.

On the other hand, I could always just go to the open commitfest (rather
than the in progress one), pick something at random myself, and have up to
3 months to review it.  I just don't have the discipline to do that, at
least not often.



 To me it outlines that some are abusing the CF app and pushing their
 useless patches (not yet ready or complete, WIP, ...)


 Seems to me that useless overstates things, but it does seem fair to
 say that some patches are not sufficiently well prepared to be efficiently
 added into Postgres.


I think that there will always be contributions that need some hand-holding
of some form.  Isn't that what returned with feedback is for?



  So, as an experiment, call it a mixed result.  I would like to have some
  other way to motivate reviewers than public shame.  I'd like to have
  some positive motivations for reviewers, such as public recognition by
  our project and respect from hackers, but I'm doubting that those are
  actually going to happen, given the feedback I've gotten on this list to
  the idea.

 You're looking at a short-term, big effect.
 And long term? Will the people listed still be interested in participating
 in a project which stamps people?

 With or without review, it's a shame if people stop proposing patches
 because they are not sure they'll have time to review other things *in time*.


 Well, if the project is hampered by not being able to get *all* the
 changes that people imagine that they want to put in, then we have a
 real problem of needing a sort of triage to determine which changes
 will be accepted, and which will not.

 Perhaps we need an extra status in the CommitFest application, namely
 one that characterizes:
Insufficiently Important To Warrant Review

 That's too long a term.  Perhaps Not Review-worthy expresses it better?



I don't think that this would really be an improvement.  Someone still has
to spend enough time looking at the patch to make the decision that it
falls into one of those categories.  Having spent sufficient time to do
that, what they did is ipso facto a review, and they should just set it as
either rejected (not important or idea unworkable) or RwF (idea is
workable, but the given implementation is not).

Cheers,

Jeff


Re: [HACKERS] Request for Patch Feedback: Lag Lead Window Functions Can Ignore Nulls

2013-07-05 Thread Jeff Davis
On Fri, 2013-06-28 at 13:26 -0400, Robert Haas wrote:
 I haven't really reviewed the windowing-related code in depth; I
 thought Jeff might jump back in for that part of it.  Jeff, is that
 something you're planning to do?

Yes, getting back into this patch now after a bit of delay.

Regards,
Jeff Davis






Re: [HACKERS] Millisecond-precision connect_timeout for libpq

2013-07-05 Thread ivan babrou
On 5 July 2013 23:26, Tom Lane t...@sss.pgh.pa.us wrote:
 ivan babrou ibob...@gmail.com writes:
 If you can figure out that postgresql is overloaded then you may
 decide what to do faster. In our app we have very strict limit for
 connect time to mysql, redis and other services, but postgresql has
 minimum of 2 seconds. When processing time for request is under 100ms
 on average sub-second timeouts matter.

 If you are issuing a fresh connection for each sub-100ms query, you're
 doing it wrong anyway ...

 regards, tom lane

In PHP you cannot persist a connection between requests without worrying
about transaction state. We don't use PostgreSQL for every sub-100ms
query because it can block the whole request for 2 seconds. Usually it
takes 1.5ms to connect, btw.

Can you tell me why having the ability to specify a more accurate
connect timeout is a bad idea?
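As background, the thread's complaint is that libpq parses connect_timeout as whole seconds with an effective 2-second minimum. A minimal sketch of fractional-second parsing into integer milliseconds (the unit a poll()/select() loop would want); the function name is made up, and only the treatment of 0 as "wait indefinitely" reflects documented libpq behaviour:

```python
# Parse a connect_timeout string that may carry fractional seconds,
# e.g. "0.1" -> 100 ms, instead of rejecting or rounding it away.
from typing import Optional

def parse_connect_timeout_ms(raw: str) -> Optional[int]:
    """Return the timeout in milliseconds, or None for 'no timeout'."""
    seconds = float(raw)
    if seconds <= 0:
        return None                      # 0 means wait indefinitely
    return max(1, int(seconds * 1000))   # never round a tiny value to 0 ms
```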

--
Regards, Ian Babrou
http://bobrik.name http://twitter.com/ibobrik skype:i.babrou




[HACKERS] Re: Request for Patch Feedback: Lag Lead Window Functions Can Ignore Nulls

2013-07-05 Thread Jeff Davis
On Mon, 2013-07-01 at 18:20 -0400, Nicholas White wrote:
  pg_get_viewdef() needs to be updated
 
 Ah, good catch - I've fixed this in the attached. I also discovered
 that there's a parent-child hierarchy of WindowDefs (using
 relname->name), so instead of cloning the WindowDef (in parse_agg.c)
 if the frameOptions are different (e.g. by adding the ignore-nulls
 flag) I create a child of the WindowDef and override the frameOptions.
 This has the useful side-effect of making pg_get_viewdef work as
 expected (the previous iteration of the patch produced a copy of the
 window definition, not the window name, as it was using a nameless
 clone), although the output has parentheses around the view name:
 
A couple comments:
 * We shouldn't create an arbitrary number of duplicate windows when
many aggregates are specified with IGNORE NULLS.
 * It's bad form to modify a list while iterating through it. This is
just a style issue because there's a break afterward, anyway.

Also, I'm concerned that we're changing a reference of the form:
  OVER w
into:
  OVER (w)
in a user-visible way. Is there a problem with having two windowdefs in
the p_windowdefs list with the same name and different frameOptions?

I think you could just change the matching criteria to be a matching
name and matching frameOptions. In the loop, if you find a matching name
but frameOptions doesn't match, keep a pointer to the windowdef and
create a new one at the end of the loop with the same name.

You'll have to be a little careful that any other code knows that names
can be duplicated in the list though.
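The matching rule proposed here can be sketched as follows; the dict-based structures are stand-ins for the parser's real WindowDef nodes, not actual PostgreSQL code:

```python
# Reuse a window definition only when both the name and the frame
# options match; otherwise register a new entry under the same name.
def find_or_add_windowdef(windowdefs, name, frame_options):
    for wd in windowdefs:
        if wd["name"] == name and wd["frameOptions"] == frame_options:
            return wd
    wd = {"name": name, "frameOptions": frame_options}
    windowdefs.append(wd)     # note: names may now be duplicated
    return wd
```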

Regards,
Jeff Davis






Re: [HACKERS] Preventing tuple-table leakage in plpgsql

2013-07-05 Thread Noah Misch
On Fri, Jul 05, 2013 at 02:47:06PM -0400, Tom Lane wrote:
 Bug #8279 exhibits an intra-transaction memory leak in a plpgsql
 function that repeatedly traps an error.  The cause of the leak is that
 the SPITupleTable created during exec_stmt_execsql is never cleaned up.
 (It will get cleaned up at function exit, but that's not soon enough in
 this usage.)

 One approach we could take is to insert PG_TRY blocks to ensure that
 SPI_freetuptable is called on error exit from such functions.  (plpython
 seems to have adopted this solution already.)  However, that tends to be
 pretty messy notationally, and possibly could represent a noticeable
 performance hit depending on what you assume about the speed of
 sigsetjmp().  Moreover, fixing known trouble spots this way does nothing
 to guard against similar errors elsewhere.

I, too, find that strategy worth avoiding as long as practical.

 So I'm inclined to propose that SPI itself should offer some mechanism
 for cleaning up tuple tables at subtransaction abort.  We could just
 have it automatically throw away tuple tables made in the current
 subtransaction, or we could allow callers to exercise some control,
 perhaps by calling a function that says don't reclaim this tuple table
 automatically.  I'm not sure if there's any real use-case for such a
 call though.

I suppose that would be as simple as making spi_dest_startup() put the
tuptabcxt under CurTransactionContext?  The function to prevent reclamation
would then just do a MemoryContextSetParent().

Is there good reason to believe that SPI tuptables are the main interesting
PL/pgSQL allocation to make a point of freeing promptly, error or no error?  I
wonder if larger sections of pl_exec.c could run under CurTransactionContext.
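As a toy model of that idea (purely illustrative, not PostgreSQL's mcxt code): a tuple-table context created under the current subtransaction's context dies with it on abort, unless it is first reparented to a longer-lived context, which is all the "don't reclaim this tuple table" API would need to do:

```python
# Minimal memory-context tree: resetting a context frees everything
# allocated in it and in its children, mirroring subtransaction abort.
class MemoryContext:
    def __init__(self, parent=None):
        self.children = []
        self.allocations = []
        self.parent = parent
        if parent:
            parent.children.append(self)

    def reset(self):
        for child in self.children:
            child.reset()
        self.children.clear()
        self.allocations.clear()

    def reparent(self, new_parent):
        self.parent.children.remove(self)
        self.parent = new_parent
        new_parent.children.append(self)
```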

 It's also not very clear to me if tuple tables should be thrown away
 automatically at subtransaction commit.  We could do that, or leave
 things alone, or add some logic to emit warning bleats about unreleased
 tuple tables (comparable to what is done for many other resource types).
 If we change anything about the commit case, I think we run significant
 risk of breaking third-party code that works now, so maybe it's best
 to leave that alone.

That's not clear to me, either.  The safe thing would be to leave the default
unchanged but expose an API to override the tuptable parent context.
Initially, only PL/pgSQL would use it.

 It might also be worth debating whether to back-patch such a fix.
 This issue has been there ever since plpgsql grew the ability to trap
 errors, so the lack of previous reports might mean that it's not worth
 taking risks to fix such leaks in back branches.

I agree that could go either way; let's see what the patch looks like.

Thanks,
nm

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: [HACKERS] Add regression tests for COLLATE

2013-07-05 Thread Andrew Dunstan


On 07/03/2013 03:16 PM, Andrew Dunstan wrote:


On 07/03/2013 03:00 PM, Alvaro Herrera wrote:

Robert Haas escribió:


I agree.  I think it'd be a good idea to get the buildfarm to run the
existing collate.utf8.linux test regularly on platforms where it
passes,

+1



I can probably whip up a module for it. (I created the buildfarm 
module system so people could create their own addons, but sadly 
nobody seems to have risen to the bait.)





See 
https://github.com/PGBuildFarm/client-code/blob/626a537034f89022b133badf6093ba243f6072aa/PGBuild/Modules/TestCollateLinuxUTF8.pm


I have added it to crake's set of modules to run; see
http://www.pgbuildfarm.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2013-07-05%2023%3A26%3A17&stg=FileTextArrayFDW-installcheck-en_US.utf8,
and it will be in the next buildfarm client release.


cheers

andrew






Re: [HACKERS] Add regression tests for COLLATE

2013-07-05 Thread Andrew Dunstan


On 07/05/2013 07:44 PM, Andrew Dunstan wrote:


On 07/03/2013 03:16 PM, Andrew Dunstan wrote:


On 07/03/2013 03:00 PM, Alvaro Herrera wrote:

Robert Haas escribió:


I agree.  I think it'd be a good idea to get the buildfarm to run the
existing collate.utf8.linux test regularly on platforms where it
passes,

+1



I can probably whip up a module for it. (I created the buildfarm 
module system so people could create their own addons, but sadly 
nobody seems to have risen to the bait.)





See 
https://github.com/PGBuildFarm/client-code/blob/626a537034f89022b133badf6093ba243f6072aa/PGBuild/Modules/TestCollateLinuxUTF8.pm


I have added it to crake's set of modules to run; see
http://www.pgbuildfarm.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2013-07-05%2023%3A26%3A17&stg=FileTextArrayFDW-installcheck-en_US.utf8,
and it will be in the next buildfarm client release.


Er I meant 
http://www.pgbuildfarm.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2013-07-05%2023%3A26%3A17&stg=install-check-collate-en_US.utf8


cheers

andrew






Re: [HACKERS] Add regression tests for COLLATE

2013-07-05 Thread Andres Freund
On 2013-07-03 14:17:20 -0400, Robert Haas wrote:
 I agree.  I think it'd be a good idea to get the buildfarm to run the
 existing collate.utf8.linux test regularly on platforms where it
 passes, but this particular approach is valuable mostly because
 (supposedly) it was going to work everywhere.  However, it doesn't.

Hm. What about extending the existing resultmap logic to make that
possible in general? E.g. don't run the test if a file is mapped to an
empty expected file.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_filedump 9.3: checksums (and a few other fixes)

2013-07-05 Thread Satoshi Nagayasu

Hi,

I have reviewed this patch as a CF reviewer.

(2013/06/27 4:07), Jeff Davis wrote:

On Mon, 2013-06-24 at 20:34 -0400, Josh Kupershmidt wrote:

This patch is in the current CommitFest, does it still need to be
reviewed? If so, I notice that the version in pgfoundry's CVS is
rather different than the version the patch seems to have been built
against (presumably the pg_filedump-9.2.0.tar.gz release), and
conflicts in several places with cvs tip.


Rebased against CVS tip; attached.


It looks fine, but I have one question here.

When I run pg_filedump with -k against a database cluster which
does not support checksums, pg_filedump produced checksum error as
following. Is this expected or acceptable?

-
***
* PostgreSQL File/Block Formatted Dump Utility - Version 9.3.0
*
* File: /tmp/pgsql/data/base/16384/16397
* Options used: -k
*
* Dump created on: Sat Jul  6 10:32:15 2013
***

Block    0 ********************************************************
<Header> -----
 Block Offset: 0x Offsets: Lower 268 (0x010c)
 Block: Size 8192  Version4Upper 384 (0x0180)
 LSN:  logid  0 recoff 0x  Special  8192 (0x2000)
 Items:   61  Free Space:  116
 Checksum: 0x  Prune XID: 0x  Flags: 0x ()
 Length (including item array): 268

 Error: checksum failure: calculated 0xf797.

<Data> ------
 Item   1 -- Length:  121  Offset: 8064 (0x1f80)  Flags: NORMAL
 Item   2 -- Length:  121  Offset: 7936 (0x1f00)  Flags: NORMAL
 Item   3 -- Length:  121  Offset: 7808 (0x1e80)  Flags: NORMAL
-
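For what it's worth, one plausible guard for the behaviour shown above is to treat a stored checksum of zero as "checksums never set" rather than as a mismatch, since a cluster initialized without -k leaves pd_checksum at 0. This is purely a sketch of that idea, not pg_filedump's actual logic:

```python
# Classify a page checksum; 'stored' and 'computed' are the 16-bit
# pd_checksum field and the freshly calculated value, respectively.
def checksum_status(stored: int, computed: int) -> str:
    if stored == 0:
        return "not set (cluster likely has checksums disabled)"
    if stored == computed:
        return "ok"
    return f"FAILURE: calculated 0x{computed:04x}"
```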

Please check the attached script to reproduce it.

Also, I have updated the help message and README.
Please check the attached patch.

Regards,
--
Satoshi Nagayasu sn...@uptime.jp
Uptime Technologies, LLC. http://www.uptime.jp
PGHOME=/tmp/pgsql
PGPORT=15433

function build_check {
echo make PGSQL_INCLUDE_DIR=...
make PGSQL_INCLUDE_DIR=../../src/include clean
make PGSQL_INCLUDE_DIR=../../src/include all
make PGSQL_INCLUDE_DIR=../../src/include install
make PGSQL_INCLUDE_DIR=../../src/include clean

echo make -f Makefile.contrib...
make -f Makefile.contrib clean
make -f Makefile.contrib all
make -f Makefile.contrib install
make -f Makefile.contrib clean

echo make -f Makefile.contrib USE_PGXS=1...
PATH=${PGHOME}/bin:$PATH
make -f Makefile.contrib USE_PGXS=1 clean
make -f Makefile.contrib USE_PGXS=1 all
make -f Makefile.contrib USE_PGXS=1 install
make -f Makefile.contrib USE_PGXS=1 clean
}

function do_builddb {
INITDB_OPTS=$1
killall -9 postmaster postgres
rm -rf ${PGHOME}/data

initdb ${INITDB_OPTS} --no-locale -D ${PGHOME}/data
pg_ctl -w -D ${PGHOME}/data start -o -p ${PGPORT}
createdb -p ${PGPORT} testdb
pgbench -p ${PGPORT} -i testdb

psql -A -t -p ${PGPORT} testdb <<EOF > file
select '${PGHOME}/data/' || pg_relation_filepath(oid) from pg_class where 
relname like 'pgbench%';
EOF
}

function builddb_checksum_enabled {
do_builddb -k
}

function builddb_checksum_disabled {
do_builddb 
}

function test_not_verify_checksum {
LOG=$1
sed 's/^/pg_filedump /' < file > _test.sh
sh _test.sh > $LOG
}

function test_verify_checksum {
LOG=$1
sed 's/^/pg_filedump -k /' < file > _test.sh
sh _test.sh > $LOG
}

build_check

builddb_checksum_enabled
test_not_verify_checksum test_enabled_not_verify.log
test_verify_checksum test_enabled_verify.log

builddb_checksum_disabled
test_not_verify_checksum test_disabled_not_verify.log
test_verify_checksum test_disabled_verify.log
diff --git a/contrib/pg_filedump/README.pg_filedump b/contrib/pg_filedump/README.pg_filedump
index 63d086f..b3050cd 100644
--- a/contrib/pg_filedump/README.pg_filedump
+++ b/contrib/pg_filedump/README.pg_filedump
@@ -2,7 +2,7 @@ pg_filedump - Display formatted contents of a PostgreSQL heap, index,
   or control file.
 
 Copyright (c) 2002-2010 Red Hat, Inc.
-Copyright (c) 2011-2012, PostgreSQL Global Development Group
+Copyright (c) 2011-2013, PostgreSQL Global Development Group
 
 This program is free software; you can redistribute it and/or modify
 it under the terms of the GNU General Public License as published by
@@ -59,10 +59,10 @@ not require any manual adjustments of the Makefile.
 
 Invocation:
 
-pg_filedump [-abcdfhixy] [-R startblock [endblock]] [-S blocksize] file
+pg_filedump [-abcdfhikxy] [-R startblock [endblock]] [-S blocksize] file
 
-Defaults are: relative addressing, range of the entire file, block size
-  as listed on block 0 in the file
+Defaults are: relative addressing, range of the entire file, block
+   

Re: [HACKERS] [PATCH] Fix conversion for Decimal arguments in plpython functions

2013-07-05 Thread Peter Eisentraut
On Fri, 2013-06-28 at 22:28 +0200, Szymon Guz wrote:
> I agree, we can check both. This is quite a nice patch now, I've reviewed
> it, all tests pass, works as expected. I think it is ready for committing.

Committed.

Very nice to get that finally fixed.




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Fix conversion for Decimal arguments in plpython functions

2013-07-05 Thread Peter Eisentraut
On Fri, 2013-06-28 at 17:29 -0300, Claudio Freire wrote:
> Why not forego checking of the type, and instead check the interface?
>
> plpy.info(x.as_tuple())
>
> Should do.
>
> >>> d = decimal.Decimal((0,(3,1,4),-2))
> >>> d.as_tuple()
> DecimalTuple(sign=0, digits=(3, 1, 4), exponent=-2)

I think that potentially exposes us to more version differences and
such, and it doesn't really add much value in the test output.






Re: [HACKERS] [PATCH] Fix conversion for Decimal arguments in plpython functions

2013-07-05 Thread Claudio Freire
On Fri, Jul 5, 2013 at 11:47 PM, Peter Eisentraut pete...@gmx.net wrote:
> On Fri, 2013-06-28 at 17:29 -0300, Claudio Freire wrote:
>> Why not forego checking of the type, and instead check the interface?
>>
>> plpy.info(x.as_tuple())
>>
>> Should do.
>>
>> >>> d = decimal.Decimal((0,(3,1,4),-2))
>> >>> d.as_tuple()
>> DecimalTuple(sign=0, digits=(3, 1, 4), exponent=-2)
>
> I think that potentially exposes us to more version differences and
> such, and it doesn't really add much value in the test output.

Why?

That interface is documented, and Python guys aren't likely to change
it, not even in alternative implementations.

Besides, this is the zen way of checking. Python uses duck typing,
so checking the actual class of stuff is frowned upon. The Pythonic
way to write tests is to check the expected behavior of returned
objects. In this case, that it has an as_tuple() method that returns the right
thing.

You could also check other interesting behavior if you want, including
its string representation, that it can be converted to float, that two
decimals can be operated upon and that they preserve the right amount
of precision, etc..
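The interface-style check described above can be sketched in plain Python (outside PL/Python; a hedged illustration, not the committed test):

```python
from decimal import Decimal

# The value from the thread: sign=0, digits=(3, 1, 4), exponent=-2, i.e. 3.14.
d = Decimal((0, (3, 1, 4), -2))

# Duck-typed check: assert on the documented as_tuple() interface rather
# than on the object's concrete class.
t = d.as_tuple()
assert t.sign == 0
assert t.digits == (3, 1, 4)
assert t.exponent == -2

# Other behavior worth checking, per the suggestion above.
assert str(d) == "3.14"
assert float(d) == 3.14
```

The same assertions work unchanged on alternative Python implementations, which is the point of testing the documented interface instead of the class.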




[HACKERS] review: WITH CHECK OPTION

2013-07-05 Thread Pavel Stehule
Hello

0) we want this feature
* it is the finalization of an ANSI SQL feature, which is now complete
* it is a useful feature that moves work with updatable views closer to heaps

1) the patch applies cleanly
2) there are no new warnings
3) all regression tests pass - there are enough tests for this feature
4) the docs rebuild without problems and the feature is well documented
5) the implemented feature is ANSI SQL compliant
6) the code is clean, well commented and well formatted
7) it works as expected

This patch is ready for commit

Regards

Pavel Stehule




Re: [HACKERS] Changing recovery.conf parameters into GUCs

2013-07-05 Thread Michael Paquier
On Sat, Jul 6, 2013 at 3:49 AM, Josh Berkus j...@agliodbs.com wrote:
> Robert, Simon, All,
>
> On 04/01/2013 04:51 AM, Robert Haas wrote: On Thu, Mar 28, 2013 at
> 11:48 AM, Simon Riggs si...@2ndquadrant.com wrote:
> a)  recovery parameters are made into GUCs (for which we have a patch
> from Fujii)
> b)  all processes automatically read recovery.conf as the last step in
> reading configuration files, if it exists, even if data_directory
> parameter is in use (attached patch)
> c)  we trigger archive recovery by the presence of either
> recovery.conf or recovery.trigger in the data directory. At the end,
> we rename to recovery.done just as we do now. This means that any
> parameters put into recovery.conf will not be re-read when we SIGHUP
> after end of recovery. Note that recovery.trigger will NOT be read for
> parameters and is assumed to be zero-length.
> (minor patch required)
>
> I still prefer Greg Smith's proposal.
>
> http://www.postgresql.org/message-id/4ee91248.8010...@2ndquadrant.com
>
> So, this seems to still be stalled.  Is there a way forward on this
> which won't cause us to wait *another* version before we have working
> replication GUCs?
Yeah, it would be good to revive this thread now, at the beginning of
the development cycle. As of now, just to remind everybody, an
agreement on this patch still needs to be found... Simon is concerned
with backward compatibility. Greg presented a nice spec some time ago
(Robert and I liked it) which would break backward compatibility but
allow us to begin with a fresh infrastructure.
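For context, the settings in question currently live in a separate recovery.conf file read only at server start; the proposal would let them be set like any other GUC. A rough before/after sketch (parameter names are from the 9.x recovery.conf era; the exact final form was still under discussion in this thread):

```
# Today: recovery.conf in the data directory, read once at startup
standby_mode = 'on'
primary_conninfo = 'host=primary port=5432 user=replicator'
recovery_target_timeline = 'latest'

# Proposed: the same parameters become ordinary GUCs in postgresql.conf,
# reloadable via SIGHUP, with archive recovery triggered by the mere
# presence of recovery.conf or a zero-length recovery.trigger file
```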
--
Michael




Re: [HACKERS] sepgsql and materialized views

2013-07-05 Thread Kohei KaiGai
Unfortunately, I could not get consensus on the design on the selinux
policy side. Even though my opinion is to add an individual security
class for materialized views to implement a refresh permission, other
people have different opinions. So, I don't want this to be a blocker
for v9.3, to avoid wasting time. Also, I'll remind the selinux
community of this issue again, and try to handle it in a different way
from what I proposed before.

Thanks,

2013/7/5 Tom Lane t...@sss.pgh.pa.us:
> Noah Misch n...@leadboat.com writes:
> On Fri, Feb 08, 2013 at 02:51:40PM +0100, Kohei KaiGai wrote:
>> I'll have a discussion about new materialized_view object class
>> on selinux list soon, then I'll submit a patch towards contrib/sepgsql
>> according to the consensus here.
>
> Has this progressed?
>
> Should we consider this a 9.3 release blocker?  sepgsql already has a red box
> warning about its limitations, so adding the limitation that materialized
> views are unrestricted wouldn't be out of the question.
>
> Definitely -1 for considering it a release blocker.  If KaiGai-san can
> come up with a fix before we otherwise would release 9.3, that's great,
> but there's no way that sepgsql has a large enough user community to
> justify letting it determine the release schedule.
>
> regards, tom lane



-- 
KaiGai Kohei kai...@kaigai.gr.jp




Re: [HACKERS] [COMMITTERS] pgsql: PL/Python: Convert numeric to Decimal

2013-07-05 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
> PL/Python: Convert numeric to Decimal

Assorted buildfarm members don't like this patch.

regards, tom lane




Re: [HACKERS] pg_filedump 9.3: checksums (and a few other fixes)

2013-07-05 Thread Jeff Davis
On Sat, 2013-07-06 at 10:30 +0900, Satoshi Nagayasu wrote:
> Hi,
>
> It looks fine, but I have one question here.
>
> When I run pg_filedump with -k against a database cluster which
> does not support checksums, pg_filedump produced a checksum error as
> follows. Is this expected or acceptable?

Thank you for taking a look. That is expected, because there is not a
good way to determine whether the file was created with checksums or
not. So, we rely on the user to supply (or not) the -k flag.
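That decision can be sketched as follows — a hypothetical illustration, not the actual pg_filedump source. A cluster initialized without checksums simply leaves the page checksum field zeroed, and a zero there is not proof either way, so the tool cannot reliably infer intent from the file and verifies only on request:

```python
def verify_page_checksum(stored, computed, verify_requested):
    """Illustrative sketch of opt-in verification: report a checksum
    result only when the user passed -k; otherwise express no opinion,
    since there is no safe automatic detection."""
    if not verify_requested:
        return None  # -k not given: no checksum output at all
    return stored == computed

# Cluster without checksums, dumped with -k: a (spurious) failure is expected.
assert verify_page_checksum(0x0000, 0xf797, True) is False
# Without -k, the same page produces no error.
assert verify_page_checksum(0x0000, 0xf797, False) is None
```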

Regards,
Jeff Davis






Re: [HACKERS] [COMMITTERS] pgsql: PL/Python: Convert numeric to Decimal

2013-07-05 Thread Claudio Freire
On Sat, Jul 6, 2013 at 2:39 AM, Tom Lane t...@sss.pgh.pa.us wrote:
> Peter Eisentraut pete...@gmx.net writes:
>> PL/Python: Convert numeric to Decimal
>
> Assorted buildfarm members don't like this patch.


Do you have failure details?

This is probably an attempt to operate on decimals vs floats.

I.e.: Decimal('3.0') > 0 works, but Decimal('3.0') > 1.3 doesn't
(decimal is explicitly forbidden from operating on floats, a design
decision that can only be disabled in 3.3).
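The exact behavior is version-dependent, which is worth keeping in mind when reading the buildfarm failures. Under a modern Python 3, comparisons between Decimal and float are allowed, but mixed arithmetic still raises TypeError — a sketch of the pitfall, not of the failing test itself:

```python
from decimal import Decimal

d = Decimal('3.0')

# Comparing against an int always works.
assert d > 0

# Under Python 3.2+, comparing against a float also works (older
# versions are where the surprises discussed above come from).
assert d > 1.3

# But mixed Decimal/float arithmetic still raises TypeError.
try:
    d + 1.3
    mixed_ok = True
except TypeError:
    mixed_ok = False
assert mixed_ok is False
```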

