Re: [HACKERS] augmenting MultiXacts to improve foreign keys
On Tue, Aug 09, 2011 at 09:41:00PM +0200, Florian Pflug wrote:
> Couldn't we simply give the user a way to specify, per column, whether or
> not that column is a KEY column? Creating a foreign key constraint could
> still implicitly mark all referenced columns as KEY columns, but columns
> would no longer be KEY columns simply because they're part of a UNIQUE
> constraint. Users would be free to add arbitrary columns to the set of KEY
> columns by doing ALTER TABLE ALTER COLUMN SET KEY.

What would be the use case for manually expanding the set of key columns?

If you have a heavily-updated table referenced by slowly-changing tables, it might pay off to disable the optimization entirely. That would be an esoteric escape hatch to provide for an uncommon use case, though.

-- 
Noah Misch                    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Problem with sources.
Success! I can't use the git protocol, but GitHub via http works fine. Thank you Andrew :)

pasman
Re: [HACKERS] augmenting MultiXacts to improve foreign keys
On Tue, Aug 09, 2011 at 01:01:04PM -0400, Alvaro Herrera wrote:
>              KEY UPDATE   FOR UPDATE   FOR SHARE   KEY SHARE
> KEY UPDATE       X            X            X           X
> FOR UPDATE       X            X            X
> FOR SHARE        X            X
> KEY SHARE        X
>
> DELETE always grabs KEY UPDATE lock on a tuple.
> UPDATE grabs KEY UPDATE if the key is being modified, otherwise FOR UPDATE.
> SELECT FOR UPDATE grabs FOR UPDATE.
> SELECT FOR SHARE grabs FOR SHARE.
> SELECT FOR KEY SHARE grabs FOR KEY SHARE. This is the mode used by RI
> triggers.
>
> Note that this means that UPDATE would always have to check whether key
> columns are being modified. (I loosely talk about key columns here,
> referring to columns involved in unique indexes. On tables with no unique
> indexes, I think this would have to mean all columns, but I haven't
> thought much about this bit.)

On tables with no key columns, we can skip the datum comparisons and use KEY UPDATE, because it does not matter: nobody would try to take KEY SHARE anyway. (If KEY SHARE is SQL-exposed, a manual acquisition remains possible. It does not seem important to cater to that, though.) Key columns should be those columns actually referenced by incoming foreign keys; the relcache can maintain that list. This was less important for the previous version, which didn't compare datums prior to encountering a live KEY LOCK. It will now be more important to avoid that overhead for tables lacking incoming FKs.

> To implement this, we need to augment MultiXact to store the lock type
> that each comprising Xid holds on the tuple. Two bits per Xid are needed.
> My thinking is that we could simply extend the members to add a byte for
> every four members. This should be relatively simple, though this
> forecloses the option of using MultiXact for some other purpose than
> locking tuples. This being purely theoretical, I don't have a problem
> with that.
>
> Note that the original keylocks patch I posted a couple of weeks ago has
> a caveat: if transaction A grabs share lock and transaction B grabs key
> lock, there's no way to know who owns which.
> I dismissed that problem as unimportant (and probably infrequent); the
> good thing about this new idea is that we wouldn't have that problem.

Is there a case not involving a manual SELECT ... FOR locktype that will depend on this for correctness? Even if not, there's a lot to like about this proposal. However, I think I may be missing the condition you had in mind when designing it.

> This would also help us find a solution to the problem that an aborted
> subtransaction that updates or excl-locks a tuple causes an earlier
> shared lock to be forgotten. We would deal with this by marking the Xmax
> with a new MultiXact that includes both the lock and the update. This
> means that this MultiXact would have to be WAL-logged. I would leave
> that for a later patch, but I think it's good that there's a way to fix
> it.

Interesting. Growing pg_multixact/members by 6% seems reasonable for those two benefits. It also makes possible a manual FOR UPDATE over a KEY LOCK; that previously looked problematic due to the lack of a later tuple version to continue bearing the KEY LOCK. Consider the simple case of a tuple with a single KEY LOCK which we then proceed to UPDATE, not touching any key column. Will that old tuple version get a MultiXact bearing both the FOR UPDATE locker and the KEY LOCK locker, or will it carry just the FOR UPDATE locker, with the new tuple version witnessing the KEY LOCK? The latter seems preferable to avoid an increase in pg_multixact usage, but I haven't worked out whether it covers enough of the problem space.

Thanks,
nm

-- 
Noah Misch                    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
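[Aside: the "two bits per member Xid, one extra byte per four members" layout discussed above, and the 6% growth figure, can be sanity-checked with a small sketch. This is illustrative only; the enum values and helper names are hypothetical, not the actual patch.]

```c
#include <stdint.h>

/*
 * Sketch of the proposed pg_multixact/members layout: a two-bit lock-mode
 * flag per member Xid, packed four modes to a flags byte.
 *
 * Overhead check: 1 flags byte per four 4-byte Xids = 1/16 = 6.25%,
 * consistent with the "6%" estimate above.
 */
enum MemberLockMode
{
	MEMBER_KEY_SHARE = 0,
	MEMBER_FOR_SHARE = 1,
	MEMBER_FOR_UPDATE = 2,
	MEMBER_KEY_UPDATE = 3
};

/* Store the two-bit mode for member n; four modes packed per flags byte. */
static void
set_member_mode(uint8_t *flags, int n, enum MemberLockMode mode)
{
	int			shift = (n % 4) * 2;

	flags[n / 4] = (uint8_t) ((flags[n / 4] & ~(3 << shift)) | (mode << shift));
}

static enum MemberLockMode
get_member_mode(const uint8_t *flags, int n)
{
	return (enum MemberLockMode) ((flags[n / 4] >> ((n % 4) * 2)) & 3);
}
```

[With such an encoding, a FOR SHARE holder and a KEY SHARE holder on the same tuple stay distinguishable, which is exactly the caveat the original keylocks patch could not resolve.]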
Re: [HACKERS] augmenting MultiXacts to improve foreign keys
On Aug 9, 2011, at 22:40, Tom Lane wrote:
> Alvaro Herrera alvhe...@commandprompt.com writes:
>> Excerpts from Jeff Davis's message of mar ago 09 14:41:14 -0400 2011:
>>> Right now, FKs aren't really very special, they are mostly just
>>> syntactic sugar (right?). This proposal would make FKs special internal
>>> mechanisms, and I don't see the benefit in doing so.
>> Well, you can get the same behavior by adding the constraint triggers
>> manually. But those triggers are written in C, so you could equally
>> claim that they are special internally already. The SPI interface has
>> some special entry points to allow them to work correctly (for example,
>> passing a snapshot for the checks to run with).
> Yeah, the crosscheck-snapshot logic already puts the lie to any idea that
> the RI triggers are equivalent to anything available at the SQL level.

True, but I still consider that to be a quite unfortunate limitation, and one that I hope to one day remove. So I'd hate to see us adding more stumbling blocks along that road. Even today, you can do user-space FK-like constraints if you restrict yourself to either running all transactions in isolation level READ COMMITTED, or all transactions in isolation level SERIALIZABLE (though in the latter case, you don't need the locks anyway).

best regards,
Florian Pflug
Re: [HACKERS] augmenting MultiXacts to improve foreign keys
On Aug 10, 2011, at 08:45, Noah Misch wrote:
> On Tue, Aug 09, 2011 at 09:41:00PM +0200, Florian Pflug wrote:
>> Couldn't we simply give the user a way to specify, per column, whether
>> or not that column is a KEY column? Creating a foreign key constraint
>> could still implicitly mark all referenced columns as KEY columns, but
>> columns would no longer be KEY columns simply because they're part of a
>> UNIQUE constraint. Users would be free to add arbitrary columns to the
>> set of KEY columns by doing ALTER TABLE ALTER COLUMN SET KEY.
> What would be the use case for manually expanding the set of key columns?

You'd use that if you implemented a FK-like constraint outside of core (e.g. a FK on an array-valued field). Without a way to manually expand the set of KEY columns, such a non-core FK constraint couldn't use the lighter locks.

best regards,
Florian Pflug
Re: [HACKERS] Reduced power consumption in autovacuum launcher process
On 10 August 2011 01:35, Tom Lane t...@sss.pgh.pa.us wrote:
> Actually, I'm nearly done with it already. Perhaps you could start
> thinking about the other polling loops.

Fair enough. I'm slightly surprised that there doesn't need to be some bikeshedding about my idea to treat the PGPROC latch as the generic, per-process latch.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services
Re: [HACKERS] Enforcing that all WAL has been replayed after restoring from backup
On 09.08.2011 19:07, Tom Lane wrote:
> Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
>> On 09.08.2011 18:20, Alvaro Herrera wrote:
>>> How about making the new backup_label field optional? If absent, assume
>>> current behavior.
>> That's how I actually did it in the patch. However, the problem wrt.
>> requiring initdb is not the new field in backup_label, it's the new
>> field in the control file.
> Yeah. I think it's too late to be fooling with pg_control for 9.1.
> Just fix it in HEAD.

Done.

-- 
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
Re: [HACKERS] Enforcing that all WAL has been replayed after restoring from backup
On Tue, Aug 9, 2011 at 18:07, Tom Lane t...@sss.pgh.pa.us wrote:
> Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
>> On 09.08.2011 18:20, Alvaro Herrera wrote:
>>> How about making the new backup_label field optional? If absent, assume
>>> current behavior.
>> That's how I actually did it in the patch. However, the problem wrt.
>> requiring initdb is not the new field in backup_label, it's the new
>> field in the control file.
> Yeah. I think it's too late to be fooling with pg_control for 9.1.
> Just fix it in HEAD.

Should we add a note to the documentation of pg_basebackup in 9.1 telling people to take care about the failure case? Or add a signal handler in the pg_basebackup client emitting a warning about it?

-- 
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Re: [HACKERS] some missing internationalization in pg_basebackup
On Tue, Aug 9, 2011 at 13:38, Peter Eisentraut pete...@gmx.net wrote:
> I noticed that the progress reporting code in pg_basebackup does not
> allow for translation. This would normally be easy to fix, but this code
> has a number of tricky issues, including the INT64_FORMAT, possibly some
> plural concerns, and some space alignment issues that are hidden in some
> of those hardcoded numbers. I'm just posting it here as an open item in
> case someone has some ideas in the meantime.

Hm, I didn't consider translation at all when writing that. Looking at it from that perspective, I realize it's pretty darn hard to make it translatable. Maybe it can/should be rewritten not to try to do all those things in one step, thus making it easier somehow? I'm afraid I don't know enough about the translation system to know exactly what would go all the way there, though...

-- 
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
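[Aside: one possible way to untangle the INT64_FORMAT part of the problem is to render the int64 counters into fixed buffers first, so translators see a single plain %s-based message instead of a platform-dependent conversion specifier. A sketch using C99's PRId64 in place of PostgreSQL's INT64_FORMAT macro, with a stub _() for gettext; the message text is illustrative, not pg_basebackup's actual output.]

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for gettext's _() macro; real code would return a translation. */
#define _(x) (x)

/*
 * Format the progress counters separately, then substitute them into one
 * translatable message.  The translatable string contains only %s, so it
 * is identical on every platform regardless of how int64 is printed.
 */
static void
print_progress(char *out, size_t outlen, int64_t done, int64_t total)
{
	char		donebuf[32];
	char		totalbuf[32];

	snprintf(donebuf, sizeof(donebuf), "%" PRId64, done);
	snprintf(totalbuf, sizeof(totalbuf), "%" PRId64, total);
	snprintf(out, outlen, _("%s/%s kB transferred"), donebuf, totalbuf);
}
```

[Space alignment and plural forms would still need separate treatment, but this removes the INT64_FORMAT obstacle from the translatable string itself.]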
Re: [HACKERS] WIP: Fast GiST index build
Hi!

Here is the latest version of the patch. List of changes:
1) Neighbor relocation and prefetch were removed. They will be supplied as separate patches.
2) Final emptying now uses standard lists of all buffers, by level.
3) Automatic switching again uses a simple comparison of index size and effective_cache_size.
4) Some renames. In particular, GISTLoadedPartItem to GISTBufferingInsertStack.
5) Some comments were corrected and some were added.
6) pgindent
7) rebased with head

README update and user documentation coming soon.

-- 
With best regards,
Alexander Korotkov.

gist_fast_build-0.11.0.patch.gz
Description: GNU Zip compressed data
Re: [HACKERS] Enforcing that all WAL has been replayed after restoring from backup
On 10.08.2011 12:29, Magnus Hagander wrote:
> On Tue, Aug 9, 2011 at 18:07, Tom Lane t...@sss.pgh.pa.us wrote:
>> Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
>>> On 09.08.2011 18:20, Alvaro Herrera wrote:
>>>> How about making the new backup_label field optional? If absent,
>>>> assume current behavior.
>>> That's how I actually did it in the patch. However, the problem wrt.
>>> requiring initdb is not the new field in backup_label, it's the new
>>> field in the control file.
>> Yeah. I think it's too late to be fooling with pg_control for 9.1.
>> Just fix it in HEAD.
> Should we add a note to the documentation of pg_basebackup in 9.1 telling
> people to take care about the failure case?

Something like "Note: if you abort the backup before it's finished, the backup won't be valid"? That seems pretty obvious to me, hardly worth documenting.

> Or add a signal handler in the pg_basebackup client emitting a warning
> about it?

We don't have such a signal handler in pg_dump either. I don't think we should add it.

-- 
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
Re: [HACKERS] Enforcing that all WAL has been replayed after restoring from backup
On Wed, Aug 10, 2011 at 12:44, Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
> On 10.08.2011 12:29, Magnus Hagander wrote:
>> On Tue, Aug 9, 2011 at 18:07, Tom Lane t...@sss.pgh.pa.us wrote:
>>> Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
>>>> On 09.08.2011 18:20, Alvaro Herrera wrote:
>>>>> How about making the new backup_label field optional? If absent,
>>>>> assume current behavior.
>>>> That's how I actually did it in the patch. However, the problem wrt.
>>>> requiring initdb is not the new field in backup_label, it's the new
>>>> field in the control file.
>>> Yeah. I think it's too late to be fooling with pg_control for 9.1.
>>> Just fix it in HEAD.
>> Should we add a note to the documentation of pg_basebackup in 9.1
>> telling people to take care about the failure case?
> Something like "Note: if you abort the backup before it's finished, the
> backup won't be valid"? That seems pretty obvious to me, hardly worth
> documenting.

I meant something more along the line that it looks ok, but may be corrupted.

>> Or add a signal handler in the pg_basebackup client emitting a warning
>> about it?
> We don't have such a signal handler in pg_dump either. I don't think we
> should add it.

Hmm. I guess an aborted pg_dump will also look ok but actually be corrupt (or incomplete). Good point.

-- 
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Re: [HACKERS] SSL-mode error reporting in libpq
On Sun, Jul 24, 2011 at 20:48, Tom Lane t...@sss.pgh.pa.us wrote:
> In testing the fix for the SSL problem that Martin Pihlak reported, I
> realized that libpq doesn't really cope very well with errors reported by
> OpenSSL. In the case at hand, SSL_write returns an SSL_ERROR_SSL code,
> which pqsecure_write quite reasonably handles by putting "SSL error: bad
> write retry" into conn->errorMessage. However, it then sets errno =
> ECONNRESET, which causes its caller pqSendSome() to overwrite that
> potentially-useful message with an outright lie: "server closed the
> connection unexpectedly".
>
> I think what we ought to do is adjust the code so that in SSL mode,
> pqsecure_write is responsible for constructing all error messages and
> pqSendSome should just leave conn->errorMessage alone. We could perhaps
> go a bit further and make pqsecure_write responsible for the error
> message in non-SSL mode too, but it looks to me like pqSendSome has to
> have a switch on the errno anyway to decide whether to keep trying or
> not, so moving that responsibility would just lead to duplicative coding.
>
> Any objections?

I haven't looked at the actual code but - does it make sense to have them responsible for different parts and append them? If not, then no objections ;)

-- 
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Re: [HACKERS] longstanding mingw warning
On Tue, Jul 26, 2011 at 15:20, Andrew Dunstan and...@dunslane.net wrote:
> Why do we get this warning on Mingw?:
>
> x86_64-w64-mingw32-gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
> -Wdeclaration-after-statement -Wendif-labels -Wformat-security
> -fno-strict-aliasing -fwrapv -g -I../../../../src/include
> -I/home/pgrunner/bf/root/HEAD/pgsql.3896/../pgsql/src/include
> -I../pgsql/src/include/port/win32 -DEXEC_BACKEND
> -I/c/prog/mingwdep/include
> -I/home/pgrunner/bf/root/HEAD/pgsql.3896/../pgsql/src/include/port/win32
> -DBUILDING_DLL -c -o mingwcompat.o
> /home/pgrunner/bf/root/HEAD/pgsql.3896/../pgsql/src/backend/port/win32/mingwcompat.c
> c:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.3896/../pgsql/src/backend/port/win32/mingwcompat.c:60:1:
> warning: 'RegisterWaitForSingleObject' redeclared without dllimport
> attribute: previous dllimport ignored
>
> Can we get rid of it?

I don't recall this warning specifically - I wonder if it's specific to certain version(s) of mingw? It's in mingwcompat.c simply because the mingw API headers were broken.

That warning sounds to me like you suddenly have RegisterWaitForSingleObject present in the system headers - can you check whether it is?

If it is in the system headers, we need some way to check when it appeared, and then add the function only if it isn't there. Either by autoconf, or if we can make a simple hardcoded ifdef (since the file is only ever compiled on mingw).

-- 
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Re: [HACKERS] Enforcing that all WAL has been replayed after restoring from backup
On Wed, Aug 10, 2011 at 6:53 AM, Magnus Hagander mag...@hagander.net wrote:
> On Wed, Aug 10, 2011 at 12:44, Heikki Linnakangas
> heikki.linnakan...@enterprisedb.com wrote:
>> On 10.08.2011 12:29, Magnus Hagander wrote:
>>> On Tue, Aug 9, 2011 at 18:07, Tom Lane t...@sss.pgh.pa.us wrote:
>>>> Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
>>>>> On 09.08.2011 18:20, Alvaro Herrera wrote:
>>>>>> How about making the new backup_label field optional? If absent,
>>>>>> assume current behavior.
>>>>> That's how I actually did it in the patch. However, the problem wrt.
>>>>> requiring initdb is not the new field in backup_label, it's the new
>>>>> field in the control file.
>>>> Yeah. I think it's too late to be fooling with pg_control for 9.1.
>>>> Just fix it in HEAD.
>>> Should we add a note to the documentation of pg_basebackup in 9.1
>>> telling people to take care about the failure case?
>> Something like "Note: if you abort the backup before it's finished, the
>> backup won't be valid"? That seems pretty obvious to me, hardly worth
>> documenting.
> I meant something more along the line that it looks ok, but may be
> corrupted.

Yeah. I'm frankly pretty nervous about shipping 9.1 with this problem, but I don't have a better idea. I'd favor making pg_basebackup emit a warning or maybe even remove the backup if it's aborted midway through.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] index sizes: single table vs partitioned
On Mon, Aug 8, 2011 at 1:38 PM, Andrew Hammond andrew.george.hamm...@gmail.com wrote:
> For a large table, should there be a difference in index sizes between a
> single table representation and a representation based on multiple
> partitions with identical indexes?

This isn't really the right mailing list for this question; this is the mailing list for the development team. I would suggest trying this on -general. I wouldn't expect there to be a big difference, but your email is light on the sort of details that might enable someone to speculate on what is going on in your particular case.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Enforcing that all WAL has been replayed after restoring from backup
On Wed, Aug 10, 2011 at 1:19 PM, Robert Haas robertmh...@gmail.com wrote:
> On Wed, Aug 10, 2011 at 6:53 AM, Magnus Hagander mag...@hagander.net wrote:
>> On Wed, Aug 10, 2011 at 12:44, Heikki Linnakangas
>> heikki.linnakan...@enterprisedb.com wrote:
>>> On 10.08.2011 12:29, Magnus Hagander wrote:
>>>> On Tue, Aug 9, 2011 at 18:07, Tom Lane t...@sss.pgh.pa.us wrote:
>>>>> Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
>>>>>> On 09.08.2011 18:20, Alvaro Herrera wrote:
>>>>>>> How about making the new backup_label field optional? If absent,
>>>>>>> assume current behavior.
>>>>>> That's how I actually did it in the patch. However, the problem
>>>>>> wrt. requiring initdb is not the new field in backup_label, it's
>>>>>> the new field in the control file.
>>>>> Yeah. I think it's too late to be fooling with pg_control for 9.1.
>>>>> Just fix it in HEAD.
>>>> Should we add a note to the documentation of pg_basebackup in 9.1
>>>> telling people to take care about the failure case?
>>> Something like "Note: if you abort the backup before it's finished,
>>> the backup won't be valid"? That seems pretty obvious to me, hardly
>>> worth documenting.
>> I meant something more along the line that it looks ok, but may be
>> corrupted.
> Yeah. I'm frankly pretty nervous about shipping 9.1 with this problem,
> but I don't have a better idea. I'd favor making pg_basebackup emit a
> warning or maybe even remove the backup if it's aborted midway through.

I don't understand why we need to change pg_control for this. Why can't we just add a line to backup_label as the first action of pg_basebackup, and then update it as the last action to show the backup set is complete? That would be safe for 9.1.

-- 
Simon Riggs                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
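[Aside: the two-phase backup_label idea above amounts to a completion marker that restore-time code checks for. A minimal sketch; the "BACKUP STATUS:" line is hypothetical and not part of the real backup_label format.]

```c
#include <string.h>

/*
 * Sketch: pg_basebackup would write backup_label with an in-progress
 * status line as its first action, and rewrite it with a completion line
 * as its very last action.  Recovery then refuses any backup whose label
 * lacks the completion marker, catching backups aborted midway.
 */
static int
backup_label_is_complete(const char *label_text)
{
	return strstr(label_text, "BACKUP STATUS: complete") != NULL;
}
```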
Re: [HACKERS] Review of VS 2010 support patches
On Sun, Jul 31, 2011 at 03:25, Andrew Dunstan and...@dunslane.net wrote:
> On 07/06/2011 08:26 PM, Brar Piening wrote:
>> -------- Original Message --------
>> Subject: Re: [HACKERS] Review of VS 2010 support patches
>> From: Andrew Dunstan and...@dunslane.net
>> To: Brar Piening b...@gmx.de
>> Date: 06.07.2011 22:58
>>> I'll remove my versions from the patch (v9 probably) if those files
>>> get committed.
>>> I'm just doing some final testing and preparing to commit the new
>>> pgflex and pgbison.
>> The attached patch includes documentation changes and excludes my
>> versions of pgbison.pl and pgflex.pl which have been replaced by
>> Andrew's versions that are already committed. As before,
>> perltidy_before.patch has to be applied first and VS2010v9.patch second.
> I just started looking at this a bit. One small question: why are we
> using "use base qw(foo);" instead of "use parent qw(foo);" which I
> understand is preferred these days?

I am no perl expert, but I see we are using this already today - in code written by you in one case ;) I'd assume it was just following the same standard... If the other way is the way to do it today, I see no reason not to change it to use that.

-- 
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Re: [HACKERS] Review of VS 2010 support patches
On Wed, Aug 10, 2011 at 9:03 AM, Magnus Hagander mag...@hagander.net wrote:
> I am no perl expert, but I see we are using this already today - in code
> written by you in one case ;) I'd assume it was just following the same
> standard... If the other way is the way to do it today, I see no reason
> not to change it to use that.

This is the first I'm hearing of "use parent" - has that been around long enough that we needn't worry about breaking old Perl versions?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Review of VS 2010 support patches
On 08/10/2011 09:03 AM, Magnus Hagander wrote:
> On Sun, Jul 31, 2011 at 03:25, Andrew Dunstan and...@dunslane.net wrote:
>> On 07/06/2011 08:26 PM, Brar Piening wrote:
>>> -------- Original Message --------
>>> Subject: Re: [HACKERS] Review of VS 2010 support patches
>>> From: Andrew Dunstan and...@dunslane.net
>>> To: Brar Piening b...@gmx.de
>>> Date: 06.07.2011 22:58
>>>> I'll remove my versions from the patch (v9 probably) if those files
>>>> get committed.
>>>> I'm just doing some final testing and preparing to commit the new
>>>> pgflex and pgbison.
>>> The attached patch includes documentation changes and excludes my
>>> versions of pgbison.pl and pgflex.pl which have been replaced by
>>> Andrew's versions that are already committed. As before,
>>> perltidy_before.patch has to be applied first and VS2010v9.patch
>>> second.
>> I just started looking at this a bit. One small question: why are we
>> using "use base qw(foo);" instead of "use parent qw(foo);" which I
>> understand is preferred these days?
> I am no perl expert, but I see we are using this already today - in code
> written by you in one case ;) I'd assume it was just following the same
> standard... If the other way is the way to do it today, I see no reason
> not to change it to use that.

Umm, where are we using it today?

[andrew@emma pg_head]$ grep -r -P 'use\s+base' .
./doc/src/sgml/release-old.sgml: what lexer you use based on the platform you use.
./doc/src/sgml/charset.sgml: encoding to use based on the specified or default locale.
./src/backend/commands/aggregatecmds.c: * Old style: use basetype parameter. This supports aggregates of
./autom4te.cache/output.0:# Required to use basename.
./autom4te.cache/output.0:# Required to use basename.
./configure:# Required to use basename.
./configure:# Required to use basename.
[andrew@emma pg_head]$

cheers

andrew
Re: [HACKERS] Review of VS 2010 support patches
On 08/10/2011 09:21 AM, Robert Haas wrote:
> On Wed, Aug 10, 2011 at 9:03 AM, Magnus Hagander mag...@hagander.net wrote:
>> I am no perl expert, but I see we are using this already today - in
>> code written by you in one case ;) I'd assume it was just following the
>> same standard... If the other way is the way to do it today, I see no
>> reason not to change it to use that.
> This is the first I'm hearing of "use parent" - has that been around
> long enough that we needn't worry about breaking old Perl versions?

Good question. Maybe not.

cheers

andrew
Re: [HACKERS] Review of VS 2010 support patches
On Wed, Aug 10, 2011 at 15:25, Andrew Dunstan and...@dunslane.net wrote:
> On 08/10/2011 09:03 AM, Magnus Hagander wrote:
>> On Sun, Jul 31, 2011 at 03:25, Andrew Dunstan and...@dunslane.net wrote:
>>> On 07/06/2011 08:26 PM, Brar Piening wrote:
>>>> -------- Original Message --------
>>>> Subject: Re: [HACKERS] Review of VS 2010 support patches
>>>> From: Andrew Dunstan and...@dunslane.net
>>>> To: Brar Piening b...@gmx.de
>>>> Date: 06.07.2011 22:58
>>>>> I'll remove my versions from the patch (v9 probably) if those files
>>>>> get committed.
>>>>> I'm just doing some final testing and preparing to commit the new
>>>>> pgflex and pgbison.
>>>> The attached patch includes documentation changes and excludes my
>>>> versions of pgbison.pl and pgflex.pl which have been replaced by
>>>> Andrew's versions that are already committed. As before,
>>>> perltidy_before.patch has to be applied first and VS2010v9.patch
>>>> second.
>>> I just started looking at this a bit. One small question: why are we
>>> using "use base qw(foo);" instead of "use parent qw(foo);" which I
>>> understand is preferred these days?
>> I am no perl expert, but I see we are using this already today - in
>> code written by you in one case ;) I'd assume it was just following the
>> same standard... If the other way is the way to do it today, I see no
>> reason not to change it to use that.
> Umm, where are we using it today?
>
> [andrew@emma pg_head]$ grep -r -P 'use\s+base' .
> ./doc/src/sgml/release-old.sgml: what lexer you use based on the platform you use.
> ./doc/src/sgml/charset.sgml: encoding to use based on the specified or default locale.
> ./src/backend/commands/aggregatecmds.c: * Old style: use basetype parameter. This supports aggregates of
> ./autom4te.cache/output.0:# Required to use basename.
> ./autom4te.cache/output.0:# Required to use basename.
> ./configure:# Required to use basename.
> ./configure:# Required to use basename.
> [andrew@emma pg_head]$

Meh. I am clearly not back in the game since my vacation. I didn't realize "base" was a keyword... Ignore and move on, nothing to see here.
-- 
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Re: [HACKERS] Reduced power consumption in autovacuum launcher process
Peter Geoghegan pe...@2ndquadrant.com writes:
> On 10 August 2011 01:35, Tom Lane t...@sss.pgh.pa.us wrote:
>> Actually, I'm nearly done with it already. Perhaps you could start
>> thinking about the other polling loops.
> Fair enough. I'm slightly surprised that there doesn't need to be some
> bikeshedding about my idea to treat the PGPROC latch as the generic,
> per-process latch.

No, I don't find that unreasonable, especially not since Simon had made that the de facto situation anyhow by having it be initialized for all backends in proc.c and set unconditionally by some of the standard signal handlers. I am working on renaming it to procLatch (I find waitLatch a bit too generic) and fixing a bunch of pre-existing bugs that I now see in that code, like failure to save/restore errno in signal handlers that used to only set a flag but now also call SetLatch.

			regards, tom lane
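[Aside: the errno bug class described above can be illustrated self-contained; set_latch() here is a stand-in for PostgreSQL's SetLatch(), which may clobber errno via a system call, and the handler shape is a sketch, not the backend's actual code.]

```c
#include <errno.h>
#include <signal.h>

/*
 * A handler that used to only set a flag now also calls set_latch(),
 * which can clobber errno.  Without the save/restore, the interrupted
 * code's errno would be silently corrupted.
 */
static volatile sig_atomic_t latch_is_set = 0;

static void
set_latch(void)
{
	errno = EBADF;				/* simulate a syscall inside SetLatch */
	latch_is_set = 1;
}

static void
sigusr1_handler(int signo)
{
	int			save_errno = errno; /* the fix: preserve caller's errno */

	(void) signo;
	set_latch();
	errno = save_errno;
}
```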
Re: [HACKERS] longstanding mingw warning
On 08/10/2011 08:08 AM, Magnus Hagander wrote:
> On Tue, Jul 26, 2011 at 15:20, Andrew Dunstan and...@dunslane.net wrote:
>> Why do we get this warning on Mingw?:
>>
>> x86_64-w64-mingw32-gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith
>> -Wdeclaration-after-statement -Wendif-labels -Wformat-security
>> -fno-strict-aliasing -fwrapv -g -I../../../../src/include
>> -I/home/pgrunner/bf/root/HEAD/pgsql.3896/../pgsql/src/include
>> -I../pgsql/src/include/port/win32 -DEXEC_BACKEND
>> -I/c/prog/mingwdep/include
>> -I/home/pgrunner/bf/root/HEAD/pgsql.3896/../pgsql/src/include/port/win32
>> -DBUILDING_DLL -c -o mingwcompat.o
>> /home/pgrunner/bf/root/HEAD/pgsql.3896/../pgsql/src/backend/port/win32/mingwcompat.c
>> c:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.3896/../pgsql/src/backend/port/win32/mingwcompat.c:60:1:
>> warning: 'RegisterWaitForSingleObject' redeclared without dllimport
>> attribute: previous dllimport ignored
>>
>> Can we get rid of it?
> I don't recall this warning specifically - I wonder if it's specific to
> certain version(s) of mingw? It's in mingwcompat.c simply because the
> mingw API headers were broken. That warning sounds to me like you
> suddenly have RegisterWaitForSingleObject present in the system headers -
> can you check whether it is?
>
> If it is in the system headers, we need some way to check when it
> appeared, and then add the function only if it isn't there. Either by
> autoconf, or if we can make a simple hardcoded ifdef (since the file is
> only ever compiled on mingw).

In some versions of the headers it's declared unconditionally; in others it's declared if _WIN32_WINNT >= 0x0500, which we are these days. But it looks like we're trying to override the builtin, not just supply a declaration.

cheers

andrew
Re: [HACKERS] Enforcing that all WAL has been replayed after restoring from backup
On 10.08.2011 15:34, Simon Riggs wrote: On Wed, Aug 10, 2011 at 1:19 PM, Robert Haas robertmh...@gmail.com wrote: On Wed, Aug 10, 2011 at 6:53 AM, Magnus Hagander mag...@hagander.net wrote: On Wed, Aug 10, 2011 at 12:44, Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote: On 10.08.2011 12:29, Magnus Hagander wrote: On Tue, Aug 9, 2011 at 18:07, Tom Lane t...@sss.pgh.pa.us wrote: Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes: On 09.08.2011 18:20, Alvaro Herrera wrote: How about making the new backup_label field optional? If absent, assume current behavior. That's how I actually did it in the patch. However, the problem wrt. requiring initdb is not the new field in backup_label, it's the new field in the control file. Yeah. I think it's too late to be fooling with pg_control for 9.1. Just fix it in HEAD. Should we add a note to the documentation of pg_basebackup in 9.1 telling people to take care about the failure case? Something like Note: if you abort the backup before it's finished, the backup won't be valid ? That seems pretty obvious to me, hardly worth documenting. I meant something more along the lines of: it looks ok, but may be corrupted. Yeah. I'm frankly pretty nervous about shipping 9.1 with this problem, but note that I don't have a better idea. I'd favor making pg_basebackup emit a warning or maybe even remove the backup if it's aborted midway through. I don't understand why we need to change pg_control for this? Why can't we just add a line to backup_label as the first action of pg_basebackup and then update it as the last action to show the backup set is complete? Hmm, that's not possible for the 'tar' output, but would work for 'dir' output. Another similar idea would be to withhold the control file in memory until the end of backup, and append it to the output as last. The backup can't be restored until the control file is written out. 
That won't protect from more complicated scenarios, like if you take the backup without the -x flag, and copy some but not all of the required WAL files manually to the pg_xlog directory. But it'd be much better than nothing for 9.1. -- Heikki Linnakangas EnterpriseDB http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] gcc 4.6 warnings in HEAD?
Hi, I'm seeing a bunch of warnings I don't remember seeing before in the master branch: /pgsql/source/HEAD/src/backend/executor/execQual.c: In function 'GetAttributeByNum': /pgsql/source/HEAD/src/backend/executor/execQual.c:1104:11: warning: the comparison will always evaluate as 'true' for the address of 'tmptup' will never be NULL [-Waddress] /pgsql/source/HEAD/src/backend/executor/execQual.c: In function 'GetAttributeByName': /pgsql/source/HEAD/src/backend/executor/execQual.c:1165:11: warning: the comparison will always evaluate as 'true' for the address of 'tmptup' will never be NULL [-Waddress] /pgsql/source/HEAD/src/backend/executor/execQual.c: In function 'ExecEvalFieldSelect': /pgsql/source/HEAD/src/backend/executor/execQual.c:3914:11: warning: the comparison will always evaluate as 'true' for the address of 'tmptup' will never be NULL [-Waddress] /pgsql/source/HEAD/src/backend/utils/sort/tuplesort.c: In function 'comparetup_heap': /pgsql/source/HEAD/src/backend/utils/sort/tuplesort.c:2755:12: warning: the comparison will always evaluate as 'true' for the address of 'ltup' will never be NULL [-Waddress] /pgsql/source/HEAD/src/backend/utils/sort/tuplesort.c:2756:12: warning: the comparison will always evaluate as 'true' for the address of 'rtup' will never be NULL [-Waddress] /pgsql/source/HEAD/src/backend/utils/sort/tuplesort.c: In function 'copytup_heap': /pgsql/source/HEAD/src/backend/utils/sort/tuplesort.c:2787:17: warning: the comparison will always evaluate as 'true' for the address of 'htup' will never be NULL [-Waddress] /pgsql/source/HEAD/src/backend/utils/sort/tuplesort.c: In function 'readtup_heap': /pgsql/source/HEAD/src/backend/utils/sort/tuplesort.c:2839:17: warning: the comparison will always evaluate as 'true' for the address of 'htup' will never be NULL [-Waddress] These seem to have to do with calling the heap_getattr() macro with a tuple in the stack (not a pointer). I have a vague memory of seeing this problem some time ago. 
/pgsql/source/HEAD/src/interfaces/libpq/fe-connect.c: In function 'PQconndefaults': /pgsql/source/HEAD/src/interfaces/libpq/fe-connect.c:832:6: warning: the comparison will always evaluate as 'false' for the address of 'errorBuf' will never be NULL [-Waddress] /pgsql/source/HEAD/src/interfaces/libpq/fe-connect.c: In function 'PQconninfoParse': /pgsql/source/HEAD/src/interfaces/libpq/fe-connect.c:3962:6: warning: the comparison will always evaluate as 'false' for the address of 'errorBuf' will never be NULL [-Waddress] In file included from /pgsql/source/HEAD/src/bin/psql/mainloop.c:425:0: /pgsql/source/HEAD/src/bin/psql/psqlscan.l: In function 'psql_scan_slash_option': /pgsql/source/HEAD/src/bin/psql/psqlscan.l:1532:9: warning: the comparison will always evaluate as 'false' for the address of 'output' will never be NULL [-Waddress] I don't understand these three -- they look like a compiler bug. This is about PQExpBufferBroken being called on a buffer that cannot be null ('cause it's on the stack) but obviously it could still have zero length. $ gcc --version gcc (Debian 4.6.1-4) 4.6.1 Copyright (C) 2011 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. -- Álvaro Herrera alvhe...@alvh.no-ip.org -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Reduced power consumption in autovacuum launcher process
Peter Geoghegan pe...@2ndquadrant.com writes: Attached is revision of this patch that now treats the latch in PGPROC, waitLatch, as the generic process latch, rather than just using it for sync rep; It is initialised appropriately as a shared latch generically, within InitProcGlobal(), and ownership is subsequently set within InitProcess(). We were doing so before, though only for the benefit of sync rep. Applied with some corrections, notably: * Signal handlers mustn't change errno and mustn't assume MyProc is set, since they might get invoked before it's set or after it's cleared. Calling SetLatch outside the save/restore of errno is right out, of course, but I also found that quite a lot of handlers had previously been hacked to call SetLatch or latch_sigusr1_handler without any save/restore at all. I'm not sure if any of those were your mistake; I think a lot of them were pre-existing bugs in the sync rep patch. * avlauncher loop was missing a ResetLatch call. I also added a comment explaining the difference from the normal pattern for using WaitLatch. * I didn't add a SetLatch call in procsignal_sigusr1_handler. I'm unconvinced that it's necessary, and if it is we probably need to rethink using SIGUSR1 internally in unix_latch.c. It does not make sense to set the procLatch when we get a signal that's directed towards releasing some other latch. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https
On 08/09/2011 04:32 PM, Tom Lane wrote: Andrew Dunstanand...@dunslane.net writes: On 08/09/2011 12:22 PM, Tom Lane wrote: No. As I pointed out upthread, the instant somebody changes the SIGALRM handler to a non-Postgres-aware one, you are already at risk of failure. Setting it back later is just locking the barn door after the horses left. Institutionalizing such a non-fix globally is even worse. So what's your suggestion? I know what you said you'd like, but it doesn't appear at all practical to me. [ shrug... ] Installing a perl module that mucks with the signal handlers is in the don't do that category. A kluge such as you suggest will not get it out of that category; all it will do is add useless overhead for people who are following the rules. Well, knowing what a given module might do isn't always easy (see below). I don't much like saying to people I told you so, especially when following the advice isn't necessarily straightforward. After some experimentation, I found that, at least on my system, if LWP uses Crypt::SSLeay for https requests then it sets an alarm handler, but if instead it uses IO::Socket::SSL an alarm handler is not set. So the answer to the OP's original problem is probably make sure you have IO::Socket::SSL installed and that Crypt::SSLeay is not installed. cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Policy on pulling in code from other projects?
On Aug 9, 2011, at 6:00 PM, Peter van Hardenberg wrote: In conclusion, this is a serious operational concern for me and my team and I will be personally dealing with fires caused by this for years to come regardless of the outcome of this thread. Do you have an interest in funding development of the necessary URI-parsing code to get the feature done for 9.2? Best, David -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https
On Aug 10, 2011, at 9:44 AM, Andrew Dunstan wrote: After some experimentation, I found that, at least on my system, if LWP uses Crypt::SSLeay for https requests then it sets an alarm handler, but if instead it uses IO::Socket::SSL an alarm handler is not set. So the answer to the OP's original problem is probably make sure you have IO::Socket::SSL installed and that Crypt::SSLeay is not installed. I think I'd also complain via bug-crypt-ssl...@rt.cpan.org that a library ought not to set signal handlers. Best, David -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] plperl crash with Debian 6 (64 bit), pl/perlu, libwww and https
Andrew Dunstan and...@dunslane.net writes: On 08/09/2011 04:32 PM, Tom Lane wrote: [ shrug... ] Installing a perl module that mucks with the signal handlers is in the don't do that category. A kluge such as you suggest will not get it out of that category; all it will do is add useless overhead for people who are following the rules. Well, knowing what a given module might do isn't always easy (see below). I don't much like saying to people I told you so, especially when following the advice isn't necessarily straightforward. I'm not thrilled with it either, but since we have no proposed patch that would actually make it *safe* for perl modules to muck with the signal handlers, I see no other alternative. A patch that simply makes it a shade less unsafe isn't really an improvement, especially when it has other disadvantages. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Enforcing that all WAL has been replayed after restoring from backup
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes: Hmm, that's not possible for the 'tar' output, but would work for 'dir' output. Another similar idea would be to withhold the control file in memory until the end of backup, and append it to the output as last. The backup can't be restored until the control file is written out. That won't protect from more complicated scenarios, like if you take the backup without the -x flag, and copy some but not all of the required WAL files manually to the pg_xlog directory. But it'd be much better than nothing for 9.1. Maybe we're overcomplicating this. What about changing pg_basebackup to print a message when the backup is completely sent/received? People would get used to that quickly, and would know to be suspicious if they didn't see it. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Enforcing that all WAL has been replayed after restoring from backup
On Wed, Aug 10, 2011 at 1:45 PM, Tom Lane t...@sss.pgh.pa.us wrote: Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes: Hmm, that's not possible for the 'tar' output, but would work for 'dir' output. Another similar idea would be to withhold the control file in memory until the end of backup, and append it to the output as last. The backup can't be restored until the control file is written out. That won't protect from more complicated scenarios, like if you take the backup without the -x flag, and copy some but not all of the required WAL files manually to the pg_xlog directory. But it'd be much better than nothing for 9.1. Maybe we're overcomplicating this. What about changing pg_basebackup to print a message when the backup is completely sent/received? People would get used to that quickly, and would know to be suspicious if they didn't see it. Yeah, but would they be sufficiently suspicious to think oh, my backup is hopelessly corrupted even if it seems to work? I think a clearer warning is needed, at the very least, and if there's a way to prevent it altogether at least in straightforward cases, that would be even better. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] gcc 4.6 warnings in HEAD?
On ons, 2011-08-10 at 12:37 -0400, Alvaro Herrera wrote: I'm seeing a bunch of warnings I don't remember seeing before in the master branch: /pgsql/source/HEAD/src/backend/executor/execQual.c: In function 'GetAttributeByNum': /pgsql/source/HEAD/src/backend/executor/execQual.c:1104:11: warning: the comparison will always evaluate as 'true' for the address of 'tmptup' will never be NULL [-Waddress] Yes, these are new with gcc 4.6. I have filed a bug about them: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48778 All the other new warnings introduced by gcc 4.6 should be cleaned up by now in the 9.2 branch. (There are more warnings in 9.1 whose fixes I did not backpatch.) I am personally using make COPT=-Werror -Wno-error=address to build PostgreSQL for the time being. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] SHOW command always returns text field
I was slightly surprised the other day that the SHOW command always returns a field of type text even if the underlying parameter has one of the other types (int, real, etc.). Is this intentional? Would it be worth refining? -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Reduced power consumption in autovacuum launcher process
On Wed, Aug 10, 2011 at 3:08 PM, Tom Lane t...@sss.pgh.pa.us wrote: Peter Geoghegan pe...@2ndquadrant.com writes: On 10 August 2011 01:35, Tom Lane t...@sss.pgh.pa.us wrote: Actually, I'm nearly done with it already. Perhaps you could start thinking about the other polling loops. Fair enough. I'm slightly surprised that there doesn't need to be some bikeshedding about my idea to treat the PGPROC latch as the generic, per-process latch. No, I don't find that unreasonable, especially not since Simon had made that the de facto situation anyhow by having it be initialized for all backends in proc.c and set unconditionally by some of the standard signal handlers. I am working on renaming it to procLatch (I find waitLatch a bit too generic) and That was the direction I wanted to go in anyway, as you guessed. fixing a bunch of pre-existing bugs that I now see in that code, like failure to save/restore errno in signal handlers that used to only set a flag but now also call SetLatch. Thanks for looking at the code; sounds like we nipped a few would-have-been-bugs there. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] sha1, sha2 functions into core?
I would like to see whether there is support for adding sha1 and sha2 functions into the core. These are obviously well-known and widely used functions, but currently the only way to get them is either through pgcrypto or one of the PLs. We could say that's OK, but then we do support md5 in core, which then encourages people to use that, when they really shouldn't use that for new applications. Another weirdness is that md5() doesn't return bytea but instead the result hex-encoded in a string, which makes it weird to use in some cases. One thing that might be reasonable would be to move the digest() functions digest(data text, type text) returns bytea digest(data bytea, type text) returns bytea from pgcrypto into core, so that pgcrypto is mostly restricted to encryption, and can be kept at arm's length for those who need to do that. (Side note: Would the extension mechanism be able to easily cope with a move like that?) -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] SHOW command always returns text field
Peter Eisentraut pete...@gmx.net writes: I was slightly surprised the other day that the SHOW command always returns a field of type text even if the underlying parameter has one of the other types (int, real, etc.). Is this intentional? Would it be worth refining? I'm disinclined to mess with it. One thing that couldn't easily be cleaned up is the textual output for nominally-numeric parameters that have units. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Enforcing that all WAL has been replayed after restoring from backup
On Wed, Aug 10, 2011 at 19:45, Tom Lane t...@sss.pgh.pa.us wrote: Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes: Hmm, that's not possible for the 'tar' output, but would work for 'dir' output. Another similar idea would be to withhold the control file in memory until the end of backup, and append it to the output as last. The backup can't be restored until the control file is written out. That won't protect from more complicated scenarios, like if you take the backup without the -x flag, and copy some but not all of the required WAL files manually to the pg_xlog directory. But it'd be much better than nothing for 9.1. Maybe we're overcomplicating this. What about changing pg_basebackup to print a message when the backup is completely sent/received? People would get used to that quickly, and would know to be suspicious if they didn't see it. That would suck for scripts, and have people redirect the output to /dev/null instead, wouldn't it? And it violates the unix expectation that a successful command will not write anything to its output... -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] sha1, sha2 functions into core?
Peter Eisentraut pete...@gmx.net writes: I would like to see whether there is support for adding sha1 and sha2 functions into the core. I can't get excited about that, but could put up with it as long as there wasn't scope creep ... One thing that might be reasonable would be to move the digest() functions digest(data text, type text) returns bytea digest(data bytea, type text) returns bytea from pgcrypto into core, ... which this approach would create, because digest() isn't restricted to just those algorithms. I think it'd be better to just invent two new functions, which also avoids issues for applications that currently expect the digest functions to be installed in pgcrypto's schema. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] longstanding mingw warning
On Wed, Aug 10, 2011 at 17:25, Andrew Dunstan and...@dunslane.net wrote: On 08/10/2011 08:08 AM, Magnus Hagander wrote: On Tue, Jul 26, 2011 at 15:20, Andrew Dunstan and...@dunslane.net wrote: Why do we get this warning on Mingw?: x86_64-w64-mingw32-gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wformat-security -fno-strict-aliasing -fwrapv -g -I../../../../src/include -I/home/pgrunner/bf/root/HEAD/pgsql.3896/../pgsql/src/include -I../pgsql/src/include/port/win32 -DEXEC_BACKEND -I/c/prog/mingwdep/include -I/home/pgrunner/bf/root/HEAD/pgsql.3896/../pgsql/src/include/port/win32 -DBUILDING_DLL -c -o mingwcompat.o /home/pgrunner/bf/root/HEAD/pgsql.3896/../pgsql/src/backend/port/win32/mingwcompat.c c:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.3896/../pgsql/src/backend/port/win32/mingwcompat.c:60:1: warning: 'RegisterWaitForSingleObject' redeclared without dllimport attribute: previous dllimport ignored Can we get rid of it? I don't recall this warning specifically - I wonder if it's specific to certain version(s) of mingw? It's in mingwcompat.c simply because the mingw API headers were broken. That warning sounds to me like you suddenly have RegisterWaitForSingleObject present in the system headers - can you check that it is? If it is in the system headers, we need some way to check when it appeared, and then add the function only if it isn't there. Either by autoconf, or if we can make a simple hardcoded ifdef (since the file is only ever compiled on mingw). In some versions of the headers it's declared unconditionally, in others it's declared if __WIN32_WINNT >= 0x0500, which we are these days. But it looks like we're trying to override the builtin, not just supply a declaration. The original reason it was put there was that it was missing from *both* the header *and* the library. Thus just providing the declaration didn't solve the problem. 
If mingw has now added it to both headers and library, then yes, it will currently try to override the builtin - but only because the builtin didn't exist at the time. So we need to detect whether the builtin exists in the installed version of mingw, and just not include our version *at all* when it does exist. This can be done either through autoconf or if there's a mingw header somewhere that changed when they did add it (like a version number?) -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] sha1, sha2 functions into core?
On 08/10/2011 02:06 PM, Peter Eisentraut wrote: I would like to see whether there is support for adding sha1 and sha2 functions into the core. These are obviously well-known and widely used functions, but currently the only way to get them is either through pgcrypto or one of the PLs. We could say that's OK, but then we do support md5 in core, which then encourages people to use that, when they really shouldn't use that for new applications. Another weirdness is that md5() doesn't return bytea but instead the result hex-encoded in a string, which makes it weird to use in some cases. One thing that might be reasonable would be to move the digest() functions digest(data text, type text) returns bytea digest(data bytea, type text) returns bytea from pgcrypto into core, so that pgcrypto is mostly restricted to encryption, and can be kept at arm's length for those who need to do that. (Side note: Would the extension mechanism be able to easily cope with a move like that?) It's come up before: http://archives.postgresql.org/pgsql-hackers/2009-09/msg01293.php +1 for returning bytea though. cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] sha1, sha2 functions into core?
On Wed, Aug 10, 2011 at 2:24 PM, Andrew Dunstan and...@dunslane.net wrote: It's come up before: http://archives.postgresql.org/pgsql-hackers/2009-09/msg01293.php I was about to wonder out loud if we might be trying to hit a moving target -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] SHOW command always returns text field
Excerpts from Tom Lane's message of Wed Aug 10 14:12:49 -0400 2011: Peter Eisentraut pete...@gmx.net writes: I was slightly surprised the other day that the SHOW command always returns a field of type text even if the underlying parameter has one of the other types (int, real, etc.). Is this intentional? Would it be worth refining? I'm disinclined to mess with it. One thing that couldn't easily be cleaned up is the textual output for nominally-numeric parameters that have units. How about having it return a record? We could have the units in a separate column ... and perhaps show the raw value too (I have wished to see the raw value plenty of times). Backwards compatibility is a serious problem though :-) -- Álvaro Herrera alvhe...@commandprompt.com The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] sha1, sha2 functions into core?
On Wed, Aug 10, 2011 at 7:06 PM, Peter Eisentraut pete...@gmx.net wrote: I would like to see whether there is support for adding sha1 and sha2 functions into the core. These are obviously well-known and widely used functions, but currently the only way to get them is either through pgcrypto or one of the PLs. We could say that's OK, but then we do support md5 in core, which then encourages people to use that, when they really shouldn't use that for new applications. Slightly different, but related - I've seen complaints that we only use md5 for password storage/transmission, which is apparently not acceptable under some government security standards. In the most recent case, they wanted to be able to use sha256 for password storage (transmission isn't really an issue where SSL can be used of course). If we're ready to move more hashing functions into core, then it seems reasonable to add more options for password storage to help those who need to meet mandated standards. -- Dave Page Blog: http://pgsnake.blogspot.com Twitter: @pgsnake EnterpriseDB UK: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] sha1, sha2 functions into core?
On ons, 2011-08-10 at 19:29 +0100, Dave Page wrote: On Wed, Aug 10, 2011 at 7:06 PM, Peter Eisentraut pete...@gmx.net wrote: I would like to see whether there is support for adding sha1 and sha2 functions into the core. These are obviously well-known and widely used functions, but currently the only way to get them is either through pgcrypto or one of the PLs. We could say that's OK, but then we do support md5 in core, which then encourages people to use that, when they really shouldn't use that for new applications. Slightly different, but related - I've seen complaints that we only use md5 for password storage/transmission, which is apparently not acceptable under some government security standards. In the most recent case, they wanted to be able to use sha256 for password storage (transmission isn't really an issue where SSL can be used of course). Yeah, that's one of those things. These days, using md5 for anything raises red flags, so it would be better to slowly move some alternatives into place. If we're ready to move more hashing functions into core, then it seems reasonable to add more options for password storage to help those who need to meet mandated standards. Yes, that would be good. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] pgstat wait timeout warnings
We occasionally see $SUBJECT in the buildfarm, and I've also recently had reports of them from Red Hat customers. The obvious theory is that these reflect high load preventing the stats collector from responding, but it would really take pretty crushing load to make that happen if there were not anything funny going on. It struck me just now while reviewing the latch code that pg_usleep could sleep for less than the expected time if a signal happened, and if that happened repeatedly for some reason, perhaps the loop could complete in much less than the nominal time. I'm not sure I believe that idea either, but anyway I'm feeling motivated to try to gather more data. Does anyone have a problem with sticking a lot of debugging printout into backend_read_statsfile() in HEAD only? I'm envisioning it starting to dump assorted information including elapsed time, errno values, etc once the loop counter is more than halfway to expiration, which is already a situation that we shouldn't see under normal conditions. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] sha1, sha2 functions into core?
On ons, 2011-08-10 at 14:26 -0400, Robert Haas wrote: On Wed, Aug 10, 2011 at 2:24 PM, Andrew Dunstan and...@dunslane.net wrote: It's come up before: http://archives.postgresql.org/pgsql-hackers/2009-09/msg01293.php I was about to wonder out loud if we might be trying to hit a moving target I think we are dealing with a lot more moving targets than adding a new version of SHA every 12 to 15 years. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] SHOW command always returns text field
Alvaro Herrera alvhe...@commandprompt.com writes: Excerpts from Tom Lane's message of Wed Aug 10 14:12:49 -0400 2011: Peter Eisentraut pete...@gmx.net writes: I was slightly surprised the other day that the SHOW command always returns a field of type text even if the underlying parameter has one of the other types (int, real, etc.). Is this intentional? Would it be worth refining? I'm disinclined to mess with it. One thing that couldn't easily be cleaned up is the textual output for nominally-numeric parameters that have units. How about having it return a record? I think that's already covered by the pg_settings view. The question is whether plain SHOW ought to change. I don't see the point. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pgstat wait timeout warnings
On 10.08.2011 21:45, Tom Lane wrote: We occasionally see $SUBJECT in the buildfarm, and I've also recently had reports of them from Red Hat customers. The obvious theory is that these reflect high load preventing the stats collector from responding, but it would really take pretty crushing load to make that happen if there were not anything funny going on. It struck me just now while reviewing the latch code that pg_usleep could sleep for less than the expected time if a signal happened, and if that happened repeatedly for some reason, perhaps the loop could complete in much less than the nominal time. I'm not sure I believe that idea either, but anyway I'm feeling motivated to try to gather more data. I've also seen this on my laptop occasionally. The most recent case I remember was when I COPYed a lot of data, so that the hard disk was really busy. The system was a bit unresponsive anyway, because of all the I/O happening. So my theory is that if the I/O is really busy, write() on the stats file blocks for more than 5 seconds, and you get the timeout. Does anyone have a problem with sticking a lot of debugging printout into backend_read_statsfile() in HEAD only? I'm envisioning it starting to dump assorted information including elapsed time, errno values, etc once the loop counter is more than halfway to expiration, which is already a situation that we shouldn't see under normal conditions. No objections here. -- Heikki Linnakangas EnterpriseDB http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
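[Editor's sketch] Tom's signal hypothesis is easy to illustrate: a wait loop that counts a fixed number of sleeps expires early if individual sleeps can return before their interval elapses, while a loop that checks an absolute deadline does not. A minimal sketch, with invented names — this is not PostgreSQL's actual backend_read_statsfile() code:

```python
import time

def wait_for_stats(file_is_fresh, timeout=5.0, interval=0.01):
    """Wait until file_is_fresh() returns True or `timeout` seconds pass.

    Checking an absolute deadline on a monotonic clock keeps the total
    wait at `timeout` even if individual sleeps are cut short (e.g. by
    a signal); a fixed iteration count would finish early in that case.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if file_is_fresh():
            return True
        time.sleep(interval)
    return False
```

The same deadline-vs-counter distinction is what would make the hypothesized early-expiration impossible regardless of how often signals interrupt the sleep.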
[HACKERS] Possible Bug in pg_upgrade
Beginning with commit 002c105a0706bd1c1e939fe0f47ecdceeae6c52d, pg_upgrade will fail if there are orphaned temp tables in the current database, with the message 'old and new databases postgres have a different number of relations'. On line 41 of pg_upgrade/info.c, pg_upgrade checks that the number of relations is the same, but it includes orphaned temp tables in the comparison. Is this expected behavior? If so, is there a more detailed error message that can be added to explain the cause of the failure? It wasn't evident until I modified pg_upgrade to show the missing OIDs and recognized them as temp tables with oid2name. Dave Byrne dby...@mdb.com MDB CAPITAL GROUP LLC -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] sha1, sha2 functions into core?
On 10.08.2011 21:45, Peter Eisentraut wrote: On ons, 2011-08-10 at 14:26 -0400, Robert Haas wrote: On Wed, Aug 10, 2011 at 2:24 PM, Andrew Dunstan and...@dunslane.net wrote: It's come up before: http://archives.postgresql.org/pgsql-hackers/2009-09/msg01293.php I was about to wonder out loud if we might be trying to hit a moving target I think we are dealing with a lot more moving targets than adding a new version of SHA every 12 to 15 years. Moving to something more modern for internal use is one thing. But regarding the user-visible md5() function, how about we jump off this treadmill and remove it altogether? And provide a backwards-compatible function in pgcrypto. -- Heikki Linnakangas EnterpriseDB http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] sha1, sha2 functions into core?
On Wed, Aug 10, 2011 at 21:02, Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote: On 10.08.2011 21:45, Peter Eisentraut wrote: On ons, 2011-08-10 at 14:26 -0400, Robert Haas wrote: On Wed, Aug 10, 2011 at 2:24 PM, Andrew Dunstanand...@dunslane.net wrote: It's come up before: http://archives.postgresql.org/pgsql-hackers/2009-09/msg01293.php I was about to wonder out loud if we might be trying to hit a moving target I think we are dealing with a lot more moving targets than adding a new version of SHA every 12 to 15 years. Moving to a something more modern for internal use is one thing. But regarding the user-visible md5() function, how about we jump off this treadmill and remove it altogether? And provide a backwards-compatible function in pgcrypto. -1. There are certainly a number of perfectly valid use-cases for md5, and it would probably break a *lot* of applications to remove it. +1 for adding the SHA functions to core as choices, of course. -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
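[Editor's sketch] The algorithms under discussion are all standard, so any implementation can be checked against another: the digests a user-visible md5() or pgcrypto's digest(data, type) would compute server-side match, for example, Python's hashlib on the same input bytes:

```python
import hashlib

# Standard "abc" test vectors; pgcrypto's digest('abc', 'md5') /
# digest('abc', 'sha256') would produce the same bytes server-side.
print(hashlib.md5(b"abc").hexdigest())     # 900150983cd24fb0d6963f7d28e17f72
print(hashlib.sha1(b"abc").hexdigest())    # a9993e364706816aba3e25717850c26c9cd0d89d
print(hashlib.sha256(b"abc").hexdigest())
```

This is also why Magnus's point stands: applications hard-code these exact hex strings, so removing md5() would break stored comparisons even though the algorithm itself is ubiquitous.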
Re: [HACKERS] Possible Bug in pg_upgrade
Dave Byrne dby...@mdb.com writes: Beginning with commit 002c105a0706bd1c1e939fe0f47ecdceeae6c52d pg_upgrade will fail if there are orphaned temp tables in the current database with the message 'old and new databases postgres have a different number of relations' On line 41 of pg_upgrade/info.c pg_upgrade checks that the number of relations are the same but includes orphaned temp tables in the comparison. Is this expected behavior? Seems like an oversight. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] WIP: Fast GiST index build
Manual and readme updates. -- With best regards, Alexander Korotkov. gist_fast_build-0.12.0.patch.gz Description: GNU Zip compressed data -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] WIP: Fast GiST index build
On 10.08.2011 13:19, Alexander Korotkov wrote: Hi! Here is the last version of the patch. List of changes: 1) Neighbor relocation and prefetch were removed. They will be supplied as separate patches. unloadNodeBuffers() is now dead code. 2) Final emptying now uses standard lists of all buffers by levels. 3) Automatic switching again uses a simple comparison of index size and effective_cache_size. LEAF_PAGES_STATS_* are unused now. Should avoid calling smgrnblocks() on every tuple, the overhead of that could add up. 4) Some renames. In particular GISTLoadedPartItem to GISTBufferingInsertStack. 5) Some comments were corrected and some were added. 6) pgindent 7) rebased with head Readme update and user documentation coming soon. I wonder, how hard would it be to merge gistBufferingBuildPlaceToPage() with the gistplacetopage() function used in the main codepath? There's very little difference between them, and it would be nice to maintain just one function. At the very least I think there should be a comment in both along the lines of NOTE: if you change this function, make sure you update (the other function) as well! In gistbuild(), in the final emptying stage, there's this special-case handling for the root block before looping through the buffers in the buffersOnLevels lists:

for (;;)
{
    nodeBuffer = getNodeBuffer(gfbb, buildstate.giststate, GIST_ROOT_BLKNO,
                               InvalidOffsetNumber, NULL, false);
    if (!nodeBuffer || nodeBuffer->blocksCount <= 0)
        break;
    MemoryContextSwitchTo(gfbb->context);
    gfbb->bufferEmptyingStack = lcons(nodeBuffer, gfbb->bufferEmptyingStack);
    MemoryContextSwitchTo(buildstate.tmpCtx);
    processEmptyingStack(buildstate.giststate, insertstate);
}

What's the purpose of that? Wouldn't the loop through buffersOnLevels lists take care of the root node too? The calculations in initBuffering() desperately need comments. As does the rest of the code too, but the heuristics in that function are particularly hard to understand without some explanation. 
-- Heikki Linnakangas EnterpriseDB http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] wal_sender_delay (WalSndDelay) has served its purpose
AFAICS we could get rid of WalSndDelay: there is no longer any reason for the walsender loop to wake up unless it's received a latch event. (Its WaitLatch call is missing WL_POSTMASTER_DEATH right now, but that is easily fixed.) Is anyone sufficiently attached to that GUC to not want to see it go away? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] wal_sender_delay (WalSndDelay) has served its purpose
On Wed, Aug 10, 2011 at 10:23 PM, Tom Lane t...@sss.pgh.pa.us wrote: AFAICS we could get rid of WalSndDelay: there is no longer any reason for the walsender loop to wake up unless it's received a latch event. (Its WaitLatch call is missing WL_POSTMASTER_DEATH right now, but that is easily fixed.) Is anyone sufficiently attached to that GUC to not want to see it go away? Please remove. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] mosbench revisited
Robert Haas robertmh...@gmail.com writes: However, it doesn't really do anything to solve this problem. The problem here is not that the size of the relation is changing too frequently - indeed, it's not changing at all in this test case. The problem is rather that testing whether or not the size has in fact changed is costing too much. You were complaining about the cost of the cache maintenance, that in the current scheme of things would have to be called far too often. Reducing the relation extension traffic would allow, I guess, to have something more expensive to reset the cache -- it would not happen much. Now, it could be that the idea is only worth “the electrons it's written on” if all the relation extensions are taken care of by a background process... The reason why we are testing the size of the relation here rather than just using reltuples is because the relation might have been extended since it was last analyzed. We can't count on analyze to run often enough to avoid looking at the actual file size. If the file's grown, we have to scale the number of tuples up proportional to the growth in relpages. Could we send the same message to the stat collector as autoanalyze is doing each time we extend a relation? -- dim -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Possible Bug in pg_upgrade
Attached is a patch that skips orphaned temporary relations in pg_upgrade if they are lingering around. It works for 9.0 - 9.1 upgrades, however I wasn't able to tell when pg_class.relistemp was added so if it was unavailable in versions prior to 9.0 an additional check will have to be added. Thanks Dave Byrne -Original Message- From: Tom Lane [mailto:t...@sss.pgh.pa.us] Sent: Wednesday, August 10, 2011 12:29 PM To: Dave Byrne Cc: pgsql-hackers@postgresql.org; Bruce Momjian Subject: Re: [HACKERS] Possible Bug in pg_upgrade Dave Byrne dby...@mdb.com writes: Beginning with commit 002c105a0706bd1c1e939fe0f47ecdceeae6c52d pg_upgrade will fail if there are orphaned temp tables in the current database with the message 'old and new databases postgres have a different number of relations' On line 41 of pg_upgrade/info.c pg_upgrade checks that the number of relations are the same but includes orphaned temp tables in the comparison. Is this expected behavior? Seems like an oversight. regards, tom lane pg_upgrade_tmp.patch Description: pg_upgrade_tmp.patch -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Possible Bug in pg_upgrade
Dave Byrne dby...@mdb.com writes: Attached is a patch that skips orphaned temporary relations in pg_upgrade if they are lingering around. It works for 9.0 - 9.1 upgrades, however I wasn't able to tell when pg_class.relistemp was added so if it was unavailable in versions prior to 9.0 an additional check will have to be added. I'm inclined to think the correct fix is to revert the assumption that the old and new databases contain exactly the same number of tables ... that seems to have a lot of potential failure modes besides this one. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
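[Editor's sketch] Whichever fix is chosen, the underlying idea is the same: orphaned temp relations (which live in pg_temp_N schemas and are flagged by pg_class.relistemp as of 9.0) must be excluded before comparing the old and new clusters. A toy sketch of that filtering — names invented, not pg_upgrade's actual code:

```python
def user_relations(rels):
    """Filter out temp relations, identified here by their pg_temp_N
    schema, before comparing old- and new-cluster relation counts."""
    return [(schema, name) for schema, name in rels
            if not schema.startswith("pg_temp")]

# An orphaned temp table in the old cluster no longer skews the count:
old_cluster = [("public", "t1"), ("pg_temp_3", "orphan")]
new_cluster = [("public", "t1")]
assert len(user_relations(old_cluster)) == len(user_relations(new_cluster))
```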
Re: [HACKERS] wal_sender_delay (WalSndDelay) has served its purpose
On Wed, Aug 10, 2011 at 5:29 PM, Simon Riggs si...@2ndquadrant.com wrote: On Wed, Aug 10, 2011 at 10:23 PM, Tom Lane t...@sss.pgh.pa.us wrote: AFAICS we could get rid of WalSndDelay: there is no longer any reason for the walsender loop to wake up unless it's received a latch event. (Its WaitLatch call is missing WL_POSTMASTER_DEATH right now, but that is easily fixed.) Is anyone sufficiently attached to that GUC to not want to see it go away? Please remove. +1! -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] mosbench revisited
2011/8/10 Dimitri Fontaine dfonta...@hi-media.com: Robert Haas robertmh...@gmail.com writes: However, it doesn't really do anything to solve this problem. The problem here is not that the size of the relation is changing too frequently - indeed, it's not changing at all in this test case. The problem is rather that testing whether or not the size has in fact changed is costing too much. You were complaining about the cost of the cache maintenance, that in the current scheme of things would have to be called far too often. Reducing the relation extension trafic would allow, I guess, to have something more expensive to reset the cache -- it would not happen much. That's an interesting point. I confess to having no idea how frequent relation extension is now, or how much overhead we could add before it started to get noticeable. It seems likely to me that we can design something that is sufficiently lightweight that we don't need to worry about reducing the relation extension traffic first, but I don't know that for certain. The reason why we are testing the size of the relation here rather than just using reltuples is because the relation might have been extended since it was last analyzed. We can't count on analyze to run often enough to avoid looking at the actual file size. If the file's grown, we have to scale the number of tuples up proportional to the growth in relpages. Could we send the same message to the stat collector as autoanalyze is doing each time we extend a relation? Mmm, maybe. I don't really like the idea of getting the stats collector involved in this; that seems like it will likely add complexity without any corresponding benefit. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
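[Editor's sketch] The scaling Robert describes amounts to keeping the tuple density observed at the last ANALYZE and applying it to the current file size. Roughly, as a sketch rather than the planner's exact code:

```python
def estimate_tuples(reltuples, relpages, curpages):
    """Scale the tuple count from the last ANALYZE by the growth in
    pages, assuming tuple density has stayed roughly constant."""
    if relpages == 0:
        return float(reltuples)  # no density information; use as-is
    return reltuples * curpages / relpages
```

For example, a table analyzed at 100 pages / 10,000 tuples that has since grown to 150 pages would be estimated at 15,000 tuples, which is why the actual file size must be consulted at all.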
Re: [HACKERS] mosbench revisited
Excerpts from Robert Haas's message of mié ago 10 21:27:08 -0400 2011: 2011/8/10 Dimitri Fontaine dfonta...@hi-media.com: Robert Haas robertmh...@gmail.com writes: However, it doesn't really do anything to solve this problem. The problem here is not that the size of the relation is changing too frequently - indeed, it's not changing at all in this test case. The problem is rather that testing whether or not the size has in fact changed is costing too much. You were complaining about the cost of the cache maintenance, that in the current scheme of things would have to be called far too often. Reducing the relation extension trafic would allow, I guess, to have something more expensive to reset the cache -- it would not happen much. That's an interesting point. I confess to having no idea how frequent relation extension is now, or how much overhead we could add before it started to get noticeable. I have seen several cases on which it (rel extension) is really troublesome. Think insertion on large bytea fields -- the toast table's extension lock was heavily contended. When this was in 8.1 we had quite some trouble because of the locking involving some shared buffer pool lwlock which is fortunately now partitioned; even without that, it is a major contention point. These were insert-only tables, so pages are always full except the last one. The reason why we are testing the size of the relation here rather than just using reltuples is because the relation might have been extended since it was last analyzed. We can't count on analyze to run often enough to avoid looking at the actual file size. If the file's grown, we have to scale the number of tuples up proportional to the growth in relpages. Could we send the same message to the stat collector as autoanalyze is doing each time we extend a relation? Mmm, maybe. I don't really like the idea of getting the stats collector involved in this; that seems like it will likely add complexity without any corresponding benefit. 
Yeah, it adds delay and uncertainty (UDP being lossy). -- Álvaro Herrera alvhe...@commandprompt.com The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] mosbench revisited
On Wed, Aug 10, 2011 at 9:46 PM, Alvaro Herrera alvhe...@commandprompt.com wrote: Excerpts from Robert Haas's message of mié ago 10 21:27:08 -0400 2011: 2011/8/10 Dimitri Fontaine dfonta...@hi-media.com: Robert Haas robertmh...@gmail.com writes: However, it doesn't really do anything to solve this problem. The problem here is not that the size of the relation is changing too frequently - indeed, it's not changing at all in this test case. The problem is rather that testing whether or not the size has in fact changed is costing too much. You were complaining about the cost of the cache maintenance, that in the current scheme of things would have to be called far too often. Reducing the relation extension trafic would allow, I guess, to have something more expensive to reset the cache -- it would not happen much. That's an interesting point. I confess to having no idea how frequent relation extension is now, or how much overhead we could add before it started to get noticeable. I have seen several cases on which it (rel extension) is really troublesome. Think insertion on large bytea fields -- the toast table's extension lock was heavily contended. When this was in 8.1 we had quite some trouble because of the locking involving some shared buffer pool lwlock which is fortunately now partitioned; even without that, it is a major contention point. These were insert-only tables, so pages are always full except the last one. Yeah. I think there's a good argument to be made that we should extend relations in larger chunks, both for this reason and to avoid fragmenting the underlying file. But in this case, the fact that relation extension is such a heavyweight operation almost works to our advantage. 
I mean, if we're already acquiring a heavyweight lock (and they're not called heavyweight for nothing), making at least one system call which updates filesystem metadata, and then releasing our heavyweight lock, the additional overhead of setting a flag in shared memory or somesuch might well be unnoticeable. Or it might not - hard to know without testing. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
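[Editor's sketch] A chunked extension policy of the kind Robert alludes to might grow the extension size with the relation, up to a cap, so the heavyweight lock and filesystem metadata update are amortized over many blocks. The numbers here are purely hypothetical, not anything PostgreSQL does:

```python
def extension_chunk(current_blocks, max_chunk=64):
    """Hypothetical policy: extend by roughly 1/8 of the current size,
    at least one block and at most max_chunk blocks, so large relations
    take the extension lock far less often per inserted tuple."""
    return min(max(1, current_blocks // 8), max_chunk)
```

A small table still grows one block at a time, while a million-block table would be extended 64 blocks per lock acquisition, cutting contention on insert-heavy workloads like the bytea/toast case Alvaro mentions.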