Re: [HACKERS] Unlogged tables, persistent kind
On Apr 24, 2011, at 1:22 PM, Simon Riggs si...@2ndquadrant.com wrote:
> Unlogged tables are a good new feature. Thanks. I noticed Bruce had
> mentioned they were the equivalent of NoSQL, which I don't really accept.

Me neither. I thought that was poorly said.

> Heap blocks would be zeroed if they were found to be damaged, following
> a crash.

The problem is not so much the blocks that are damaged (e.g. half-written, torn page) but the ones that were never written at all. For example: read page A, read page B, update a tuple on page A putting the new version on page B, write one but not both of A and B out to the OS, crash. Everything on disk is a valid page, but the pages are not coherent taken as a whole. It's normally XLOG replay that fixes this type of situation...

I thought about this problem a bit, and I think you could perhaps deal with it by having some sort of partially logged table, where we would XLOG just enough to know which blocks or relations had been modified, and nuke only enough data to be certain of being safe. But it isn't clear that there is much of a use case for this, especially because I think it would give up nearly all of the performance benefit. I do think it might be useful to have an unlogged index on a logged table, somehow frobnicated so that after a crash the index is known invalid and not used until a REINDEX is performed.

...Robert

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
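[Editor's note] The failure mode Robert describes can be sketched with a toy simulation. Everything here is invented for illustration (the page layout, the magic value, the helper names); it is not PostgreSQL's on-disk format. The point is only that a per-page "is this a well-formed page?" check cannot see the problem when one of two dirtied pages never reaches disk:

```python
MAGIC = 0xA1B2  # invented page-header magic for this sketch

def make_page(tuples, next_page=None):
    """A toy heap page: header magic, tuple list, optional chain pointer."""
    return {"magic": MAGIC, "tuples": list(tuples), "next": next_page}

def looks_like_valid_page(page):
    """All a per-page damage check can see: is this a well-formed page?"""
    return page.get("magic") == MAGIC

# The update in memory: the old tuple on A gains a pointer to the new
# version, which is placed on B.
mem_a = make_page(["t1 v1"], next_page="B")
mem_b = make_page(["t1 v2"])

# Crash after writing A but not B: B still holds its pre-update image.
disk = {"A": mem_a, "B": make_page([])}

# Both pages on disk pass the per-page validity check ...
assert all(looks_like_valid_page(p) for p in disk.values())
# ... yet A's update chain points at a page that never got the new version.
assert disk["A"]["next"] == "B" and disk["B"]["tuples"] == []
```

Neither page is "damaged" in isolation; only the pair taken together is incoherent, which is exactly what XLOG replay normally repairs.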
Re: [HACKERS] Unlogged tables, persistent kind
The only data we can't rebuild is the heap. So what about an option for UNlogged indexes on a LOGged table? It would always preserve data, and it would 'only' cost rebuilding the indexes in case of an unclean shutdown. I think it would give a boost in performance for all those cases where the I/O (especially random I/O) is caused by the indexes, and it doesn't look too complicated (but maybe I'm missing something).

I proposed the unlogged-to-logged patch (BTW, has anyone given it a look?) because we partition data based on a timestamp: we can risk losing the last N minutes of data, but after N minutes we want to know the data will always be there, so we would like to be able to set a partition table to 'logged'.

Leonardo
Re: [HACKERS] Unlogged tables, persistent kind
On Mon, Apr 25, 2011 at 8:36 AM, Leonardo Francalanci m_li...@yahoo.it wrote:
> The only data we can't rebuild is the heap. So what about an option for
> UNlogged indexes on a LOGged table? It would always preserve data, and
> it would 'only' cost rebuilding the indexes in case of an unclean
> shutdown. I think it would give a boost in performance for all those
> cases where the I/O (especially random I/O) is caused by the indexes,
> and it doesn't look too complicated (but maybe I'm missing something).

I agree that unlogged indexes on a logged heap are better for resilience and are likely to be the best first step.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] Unlogged tables, persistent kind
On Mon, Apr 25, 2011 at 8:14 AM, Robert Haas robertmh...@gmail.com wrote:
> The problem is not so much the blocks that are damaged (e.g.
> half-written, torn page) but the ones that were never written at all.
> For example: read page A, read page B, update a tuple on page A putting
> the new version on page B, write one but not both of A and B out to the
> OS, crash. Everything on disk is a valid page, but they are not coherent
> taken as a whole. It's normally XLOG replay that fixes this type of
> situation...

Not really sure it matters what the cause of the data loss is, does it? The zeroing of the blocks definitely causes data loss, but the intention is to bring the table back to a consistent physical state, not to in any way repair the data loss.

Repeating my words above, this proposed option trades potential minor data loss for performance. The amount of data loss on a big table will be 1% of the data loss caused by truncating the whole table. This is important on big tables, where reloading from a backup might take a long time.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
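[Editor's note] The trade-off Simon quantifies can be made concrete with a back-of-the-envelope sketch. The table size and the dirty fraction below are hypothetical; the fraction of blocks that would actually need zeroing depends entirely on the workload since the last checkpoint:

```python
TABLE_BLOCKS = 1_000_000      # a hypothetical big table
DIRTY_FRACTION = 0.01         # assumed share of blocks modified since checkpoint

# Current behaviour: an unlogged table is truncated wholesale after a crash.
lost_truncate = TABLE_BLOCKS

# Proposed behaviour: zero only the blocks that might be incoherent.
lost_zeroing = int(TABLE_BLOCKS * DIRTY_FRACTION)

assert lost_zeroing == 10_000
# The partial loss is 1% of the full-truncate loss, per the assumption above.
assert lost_zeroing / lost_truncate == 0.01
```

The point of the sketch: the benefit scales with how small the recently-dirtied portion is relative to the table, which is also why it matters most for big tables that are slow to reload from backup.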
[HACKERS] wrong hint message for ALTER FOREIGN TABLE
I noticed that ALTER FOREIGN TABLE ... RENAME TO emits a wrong hint message if the object was not a foreign table. ISTM that the hint message is not necessary there. Attached patch removes the hint message.

Steps to reproduce the situation:

postgres=# CREATE FOREIGN TABLE foo () SERVER file_server;
postgres=# ALTER FOREIGN TABLE foo RENAME TO bar;
ERROR:  "foo" is not a foreign table
HINT:  Use ALTER FOREIGN TABLE instead.

Regards,
--
Shigeru Hanada

diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 7660114..09f3f4e 100644
*** a/src/backend/commands/tablecmds.c
--- b/src/backend/commands/tablecmds.c
*** RenameRelation(Oid myrelid, const char *
*** 2268,2275 ****
  		ereport(ERROR,
  				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
  				 errmsg("\"%s\" is not a foreign table",
! 						RelationGetRelationName(targetrelation)),
! 				 errhint("Use ALTER FOREIGN TABLE instead.")));

  	/*
  	 * Don't allow ALTER TABLE on composite types. We want people to use ALTER
--- 2268,2274 ----
  		ereport(ERROR,
  				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
  				 errmsg("\"%s\" is not a foreign table",
! 						RelationGetRelationName(targetrelation))));

  	/*
  	 * Don't allow ALTER TABLE on composite types. We want people to use ALTER
Re: [HACKERS] wrong hint message for ALTER FOREIGN TABLE
On 25 April 2011 10:06, Shigeru Hanada han...@metrosystems.co.jp wrote:
> Steps to reproduce the situation:
>
> postgres=# CREATE FOREIGN TABLE foo () SERVER file_server;
> postgres=# ALTER FOREIGN TABLE foo RENAME TO bar;
> ERROR:  "foo" is not a foreign table
> HINT:  Use ALTER FOREIGN TABLE instead.

Don't you mean that you created a regular table first, then tried to rename it as a foreign table? Your example here will succeed without the error.

--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935
EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] wrong hint message for ALTER FOREIGN TABLE
(2011/04/25 19:34), Thom Brown wrote:
> Don't you mean that you created a regular table first, then tried to
> rename it as a foreign table? Your example here will succeed without
> the error.

Oops, you are right. The right procedure to reproduce it is:

postgres=# CREATE TABLE foo (c1 int);
CREATE TABLE
postgres=# ALTER FOREIGN TABLE foo RENAME TO bar;
ERROR:  "foo" is not a foreign table
HINT:  Use ALTER FOREIGN TABLE instead.

Regards,
--
Shigeru Hanada
Re: [HACKERS] Windows 64 bit warnings
> One is at src/interfaces/ecpg/ecpglib/sqlda.c:231, which is this line:
>
>     sqlda->sqlvar[i].sqlformat = (char *) (long) PQfformat(res, i);
>
> I'm not clear about the purpose of this anyway.

After not hearing from the author, I just commented out that line. I cannot find any explanation of what should be stored in sqlformat.

> It doesn't seem to be used anywhere, and the comment on the field says
> it's for future use.

Gosh, I didn't know that our own source says it's reserved for future use. I guess that makes removing the statement even more of an option.

Michael
--
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
Jabber: michael.meskes at googlemail dot com
VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL
Re: [HACKERS] Unlogged tables, persistent kind
On Mon, Apr 25, 2011 at 5:04 AM, Simon Riggs si...@2ndquadrant.com wrote:
> Not really sure it matters what the cause of the data loss is, does it?
> The zeroing of the blocks definitely causes data loss, but the intention
> is to bring the table back to a consistent physical state, not to in any
> way repair the data loss.

Right, but the trick is how you identify which blocks you need to zero. You used the word "damaged", which to me implied that the block had been modified in some way but ended up with other than the expected contents, so that something like a CRC check might detect the problem. My point (as perhaps you already understand) is that you could easily have a situation where every block in the table passes a hypothetical block-level CRC check, but the table as a whole is still damaged because update chains aren't coherent. So you need some kind of mechanism for identifying which portions of the table you need to zero to get back to a guaranteed-coherent state.
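[Editor's note] Robert's distinction between block-level damage and whole-table incoherence can be sketched as follows. This is a toy model: the page contents, the use of CRC-32, and the "chain" marker are invented for illustration and do not reflect how PostgreSQL page checksums actually work:

```python
import zlib

def checksum(page_bytes):
    """Stand-in for a hypothetical block-level CRC."""
    return zlib.crc32(page_bytes)

# Two on-disk "pages", each stored with the CRC from when it was last written.
page_a = b"tuple t1, ctid chain -> page B"   # reached disk before the crash
page_b = b"empty page"                        # new tuple version never written
stored = {"A": (page_a, checksum(page_a)), "B": (page_b, checksum(page_b))}

# A block-level CRC scan finds nothing wrong: every block matches its CRC,
# because each block faithfully reflects what was (or was never) written.
assert all(checksum(data) == crc for data, crc in stored.values())

# The damage is relational, invisible to any per-block check: page A's chain
# points at a tuple version that page B never received.
assert b"chain -> page B" in stored["A"][0]
assert stored["B"][0] == b"empty page"
```

Hence the need for a separate mechanism (normally WAL replay) to know which portions of the table might be incoherent; no amount of per-block validation can recover that information.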
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Unlogged tables, persistent kind
On Mon, Apr 25, 2011 at 3:36 AM, Leonardo Francalanci m_li...@yahoo.it wrote:
> The only data we can't rebuild is the heap. So what about an option for
> UNlogged indexes on a LOGged table? It would always preserve data, and
> it would 'only' cost rebuilding the indexes in case of an unclean
> shutdown.

+1.

> I proposed the unlogged-to-logged patch (BTW, has anyone given it a
> look?) because we partition data based on a timestamp, and we can risk
> losing the last N minutes of data, but after N minutes we want to know
> the data will always be there, so we would like to set a partition
> table to 'logged'.

That approach is something I had also given some thought to, and I'm glad to hear that people are thinking about doing it in the real world. I'm planning to look at your patch, but I haven't gotten to it yet, because I'm giving priority to anything that must be done to get 9.1beta1 out the door.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Extension Packaging
On Sun, Apr 24, 2011 at 6:03 PM, Tom Lane t...@sss.pgh.pa.us wrote:
> "David E. Wheeler" da...@kineticode.com writes:
>> On Apr 24, 2011, at 2:55 PM, Tom Lane wrote:
>>> Hmm ... it's sufficient, but I think people are going to be confused
>>> as to proper usage if you call two different things the "version". In
>>> RPM terminology there's a clear difference between "version" and
>>> "release"; maybe some similar wording should be adopted here? Or use
>>> "major version" versus "minor version"?
>>
>> I could s/version/release/ for the distribution version. Frankly, the
>> way the terminology is now, it's halfway there already. So distribution
>> "semver" release 1.1.0 might contain extension "semver" version 1.0.0.
>> Hrm. Still rather confusing.
>
> Yeah. It seems like a bad idea if the distribution name doesn't include
> sufficient information to tell which version it contains. I had in mind
> a convention like "distribution version x.y.z always contains extension
> version x.y". Seems like "minor version" versus "major version" would
> be the way to explain that.

I think it's a bit awkward that we have to do it this way, though. The installed version of the extension at the SQL level won't match what the user thinks they've installed. Granted, it'll be in the ballpark (1.0 vs. 1.0.3, for example), but that's not quite the same thing. I also note that we've moved PDQ from thinking that versions are opaque strings to having pretty specific ideas about how they are going to have to be assigned and managed to avoid maintainer insanity. That suggests to me that at a minimum we need some more documentation here.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
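[Editor's note] The convention Tom sketches ("distribution version x.y.z always contains extension version x.y") amounts to deriving the extension version by dropping the patch component. A minimal sketch of that mapping; the helper name is invented and not part of any proposed API:

```python
def extension_version(distribution_version: str) -> str:
    """Map a distribution version 'x.y.z' to its extension version 'x.y'."""
    major, minor, _patch = distribution_version.split(".")
    return f"{major}.{minor}"

# Distribution "semver 1.0.3" would ship extension version "1.0" ...
assert extension_version("1.0.3") == "1.0"
# ... and 1.0.0, 1.0.1, 1.0.2 all contain the same SQL-level extension,
# which is exactly the mismatch Robert finds awkward.
assert {extension_version(v) for v in ("1.0.0", "1.0.1", "1.0.2")} == {"1.0"}
```

This also shows why the terminology matters: patch releases change the distribution's version string without changing what `\dx` would report inside the database.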
Re: [HACKERS] make check in contrib
On Sun, Apr 24, 2011 at 7:18 PM, Peter Eisentraut pete...@gmx.net wrote:
> I noticed again that make check in contrib doesn't work, so here is a
> patch to fix it. Perhaps someone wants to fill in the Windows support
> for it. Naturally, this works only for contrib itself, not for external
> packages that use pgxs.
>
> A secondary issue that actually led to this: I was preparing a Debian
> package for some module^Wextension^W^Hthing that uses pgxs. The Debian
> packaging tools (dh, to be exact, for the insiders) have this
> convenience that by default they examine your makefile and execute the
> standard targets, if found, in order. So it runs make all, make check,
> which then fails because of
>
> check:
> 	@echo "'make check' is not supported."
> 	@echo "Do 'make install', then 'make installcheck' instead."
> 	@exit 1
>
> You can override this, but it still means that everyone who packages an
> extension will have to re-figure this out. So while this message might
> be moderately useful (although I'm not sure whether it's guaranteed that
> the suggestion will always work), I'd rather get rid of it and not have
> a check target in the pgxs case.

I think it might be more useful to have a check target that actually succeeds, even if it does nothing useful. The argument that no check target at all is more useful than a check target that fails with a reasonably informative error message seems weak to me. It's only going to be true if, as in the case you mention, external software is directly inspecting the makefile to figure out what to do. And that's a pretty weird case to optimize for. Maybe just change @exit 1 to @exit 0 and call it good?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] smallserial / serial2
On Thu, Apr 21, 2011 at 11:06 AM, Mike Pultz m...@mikepultz.com wrote:
> And since serial4 and serial8 are simply pseudo-types, effectively
> there for convenience, I'd argue that it should simply be there for
> completeness; just because it may be less used doesn't mean it
> shouldn't be convenient?

Right now, smallint is a bit like an unwanted stepchild in the PostgreSQL type system. In addition to the problem you hit here, there are various situations where using smallint requires casts in cases where int4 or int8 would not. Ken Rosensteel even talked about this being an obstacle to Oracle-to-PostgreSQL migrations, in his talk at PG East (see his slides for details). Generally, I think this is a bad thing. We should be trying to put all types on equal footing, rather than artificially privilege some over others. Unfortunately, this is easier said than done, but I don't think that's a reason to give up trying.

So a tentative +1 from me on supporting this. You might want to review: http://wiki.postgresql.org/wiki/Submitting_a_Patch

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
[HACKERS] branching for 9.2devel
The recent and wide-ranging "formatting curmudgeons" thread included suggestions by Tom and myself that we should consider branching the tree immediately after beta1.

http://archives.postgresql.org/pgsql-hackers/2011-04/msg01157.php
http://archives.postgresql.org/pgsql-hackers/2011-04/msg01162.php

This didn't get much commentary, but others have expressed support for similar ideas in the past, so perhaps we should do it? Comments?

The other major issue discussed on the thread was how frequent and how long CommitFests should be. I don't think we really came to a consensus on that one. I think that's basically a trade-off: if we make CommitFests more frequent and shorter, we can give people feedback more quickly (though I'm not sure that problem is horribly bad anyway; witness that there have been numerous reviews of WIP patches in just the last few weeks while we've been pursuing beta hard) and committers will have more time to work on their own projects, BUT the rejection rate will go up, patch authors will get less help finishing their work, it'll be harder to organize reviewers (see esp. the note by Greg Smith in that regard), and there may be even more of a crush at the end of the release cycle.

On balance, I think I prefer the current arrangement, though if we could make the CommitFests a bit shorter I would certainly like that better. I don't know how to make that happen without more reviewers, though.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] make check in contrib
On 04/25/2011 08:53 AM, Robert Haas wrote:
> The argument that no check target at all is more useful than a check
> target that fails with a reasonably informative error message seems
> weak to me.

+1 (weak too)

cheers

andrew
Re: [HACKERS] branching for 9.2devel
On 04/25/2011 09:17 AM, Robert Haas wrote:
> The recent and wide-ranging "formatting curmudgeons" thread included
> suggestions by Tom and myself that we should consider branching the
> tree immediately after beta1. This didn't get much commentary, but
> others have expressed support for similar ideas in the past, so perhaps
> we should do it? Comments?

I am on record in the past as supporting earlier branching.

cheers

andrew
Re: [HACKERS] SSI non-serializable UPDATE performance
On Sun, Apr 24, 2011 at 11:33 PM, Dan Ports d...@csail.mit.edu wrote:
> On Sat, Apr 23, 2011 at 08:54:31AM -0500, Kevin Grittner wrote:
>> Even though this didn't show any difference in Dan's performance
>> tests, it seems like reasonable insurance against creating a new
>> bottleneck in very high concurrency situations. Dan, do you have a
>> patch for this, or should I create one?
>
> Sure, patch is attached.

Committed.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] stored procedures
On Fri, Apr 22, 2011 at 11:46 PM, David Christensen da...@endpoint.com wrote:
> On Apr 22, 2011, at 3:50 PM, Tom Lane wrote:
>> Merlin Moncure mmonc...@gmail.com writes:
>>> On Fri, Apr 22, 2011 at 1:28 PM, Peter Eisentraut pete...@gmx.net wrote:
>>>> It would probably be more reasonable and feasible to have a setup
>>>> where you can end a transaction in plpgsql but a new one would start
>>>> right away.
>>>
>>> ya, that's an idea.
>>
>> Yeah, that's a good thought. Then we'd have a very well-defined
>> collection of state that had to be preserved through such an
>> operation, ie, the variable values and control state of the SP. It
>> also gets rid of the feeling that you ought not be in a transaction
>> when you enter the SP. There's still the problem of whether you can
>> invoke operations such as VACUUM from such an SP. I think we'd want to
>> insist that they terminate the current xact, which is perhaps not too
>> cool.
>
> Dumb question, but wouldn't this kind of approach open up a window
> where (say) datatypes, operators, catalogs, etc., could
> disappear/change out from under you, being that you're now in a
> different transaction/snapshot, presuming there is a concurrent
> transaction from a different backend modifying the objects in question?

That's a good question. This is already a problem for functions: an object you are dependent upon in the function body can disappear at any time. If you grabbed the lock first you're OK, but otherwise you're not, and the caller will receive an error. Starting with 8.3 there is plan cache machinery that invalidates plans used inside plpgsql, which should prevent the worst problems. If you're cavalier about deleting objects that are used in a lot of functions, you can get really burned from a performance standpoint, but that's no different from dealing with functions today. Procedures, unlike functions, can no longer rely on the catalogs remaining static, visibility-wise, throughout execution.

pl_comp.c is full of catalog lookups, and that means some assumptions made during compilation are no longer valid for procedures. A missing table isn't such a big deal, but maybe it's possible to make intermediate changes while a procedure is executing that cause an expression to parse differently, or not at all (for example, replacing a scalar function with a setof)? This could be a minefield of problems, or possibly not; I really just don't know all the details, and perhaps some experimentation is in order. One thing that's tempting is to force recompilation upon certain things happening so you can catch this stuff proactively, but plpgsql function compilation is very slow, and this approach is probably very complex. Ideally we can just bail from the procedure if external events cause things to go awry.

merlin
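[Editor's note] Tom's "well-defined collection of state that had to be preserved" across an in-procedure COMMIT can be sketched in miniature: if the procedure body is driven as a coroutine, its local variables and control position survive while the surrounding transaction is ended and a new one begun. This is purely illustrative; the coroutine approach and all names here are invented, not a description of any proposed implementation:

```python
def toy_procedure():
    """A 'stored procedure' body whose local state must survive COMMIT."""
    total = 0
    for batch in ([1, 2], [3, 4]):
        total += sum(batch)
        yield "COMMIT"          # end the current transaction mid-procedure
    yield f"done total={total}"  # 'total' survived both commits

log = []
proc = toy_procedure()
for event in proc:
    if event == "COMMIT":
        # Tear down the old transaction and immediately start a new one,
        # per Peter's suggestion upthread; the procedure's frame is untouched.
        log.append("commit txn")
    else:
        log.append(event)

assert log == ["commit txn", "commit txn", "done total=10"]
```

The coroutine's frame is exactly the "variable values and control state" Tom describes; everything outside it (snapshots, locks, catalog visibility) is fair game to change between the yields, which is David's concern.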
Re: [HACKERS] wrong hint message for ALTER FOREIGN TABLE
Shigeru Hanada han...@metrosystems.co.jp writes:
> I noticed that ALTER FOREIGN TABLE RENAME TO emits a wrong hint message
> if the object was not a foreign table. ISTM that the hint message is
> not necessary there. Attached patch removes the hint message.

Surely it would be better to make the hint correct (ie, "Use ALTER TABLE") rather than just nuke it?

regards, tom lane
Re: [HACKERS] stored procedures
Merlin Moncure mmonc...@gmail.com wrote:
> Procedures, unlike functions, can no longer rely on the catalogs
> remaining static, visibility-wise, throughout execution.

If you start from the perspective that stored procedures are in many respects more like psql scripts than functions, this shouldn't be too surprising. If you have a psql script with multiple database transactions, you know that other processes can change things between transactions. Same deal with SPs.

The whole raison d'être for SPs is that there are cases where people need something *different* from functions. While it would be *nice* to leverage plpgsql syntax for a stored procedure language, if it means we have to behave like a function, it's not worth it.

-Kevin
Re: [HACKERS] branching for 9.2devel
* Robert Haas (robertmh...@gmail.com) wrote:
> On balance, I think I prefer the current arrangement, though if we
> could make the CommitFests a bit shorter I would certainly like that
> better. I don't know how to make that happen without more reviewers,
> though.

Given our current method (where we allow authors to update their patches during a CF), I don't see that we need, or should try for, shorter CFs. If we actually just reviewed patches once, it'd be a very different situation. So, +1 from me for keeping it as-is.

I do wonder if this is coming up now just because we're getting closer to a release and people are, unsurprisingly, wishing they had been able to get their fav. patch in before the deadline. :)

Thanks,

Stephen
Re: [HACKERS] make check in contrib
Robert Haas robertmh...@gmail.com writes:
> On Sun, Apr 24, 2011 at 7:18 PM, Peter Eisentraut pete...@gmx.net wrote:
>> I noticed again that make check in contrib doesn't work, so here is a
>> patch to fix it.
>
> I think it might be more useful to have a check target that actually
> succeeds, even if it does nothing useful.

That argument seems a bit irrelevant to the proposed patch.

regards, tom lane
[HACKERS] intermittent FD regression check failure
The sort of failure shown below has happened a few times recently. See recent failures on crake, mastodon and castoroides at http://www.pgbuildfarm.org/cgi-bin/show_failures.pl. It seems harmless enough. Do we need an alternative regression results file, or is there some way to prevent this?

cheers

andrew

*** /home/bf/bfr/root/HEAD/pgsql.4238/../pgsql/src/test/regress/expected/foreign_data.out	Fri Apr  1 11:37:02 2011
--- /home/bf/bfr/root/HEAD/pgsql.4238/src/test/regress/results/foreign_data.out	Mon Apr 25 09:41:48 2011
***************
*** 1088,1098 ****
  DROP USER MAPPING FOR regress_test_role SERVER s6;
  DROP FOREIGN DATA WRAPPER foo CASCADE;
  NOTICE:  drop cascades to 5 other objects
! DETAIL:  drop cascades to server s4
  drop cascades to user mapping for foreign_data_user
  drop cascades to server s6
- drop cascades to server s9
- drop cascades to user mapping for unprivileged_role
  DROP SERVER s8 CASCADE;
  NOTICE:  drop cascades to 2 other objects
  DETAIL:  drop cascades to user mapping for foreign_data_user
--- 1088,1098 ----
  DROP USER MAPPING FOR regress_test_role SERVER s6;
  DROP FOREIGN DATA WRAPPER foo CASCADE;
  NOTICE:  drop cascades to 5 other objects
! DETAIL:  drop cascades to server s9
! drop cascades to user mapping for unprivileged_role
! drop cascades to server s4
  drop cascades to user mapping for foreign_data_user
  drop cascades to server s6
  DROP SERVER s8 CASCADE;
  NOTICE:  drop cascades to 2 other objects
  DETAIL:  drop cascades to user mapping for foreign_data_user
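[Editor's note] The failure above is pure ordering: both runs dropped exactly the same five objects, just listed in a different sequence. A sketch of why a line-by-line comparison trips over this while an order-insensitive one would not (illustrative only; pg_regress itself does textual diffing, which is why an alternative expected file is the usual fix):

```python
# The DETAIL lines from the expected file and from the failing run.
expected = [
    "drop cascades to server s4",
    "drop cascades to user mapping for foreign_data_user",
    "drop cascades to server s6",
    "drop cascades to server s9",
    "drop cascades to user mapping for unprivileged_role",
]
actual = [
    "drop cascades to server s9",
    "drop cascades to user mapping for unprivileged_role",
    "drop cascades to server s4",
    "drop cascades to user mapping for foreign_data_user",
    "drop cascades to server s6",
]

# A positional comparison (what the regression diff effectively shows) fails ...
assert expected != actual
# ... even though the two runs reported exactly the same set of cascaded drops.
assert sorted(expected) == sorted(actual)
```

The underlying nondeterminism is the enumeration order of the dependent objects, which is why either an alternative expected file or a deterministic ordering of the cascade report would make the test stable.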
Re: [HACKERS] branching for 9.2devel
Robert Haas robertmh...@gmail.com wrote:
> The recent and wide-ranging "formatting curmudgeons" thread included
> suggestions by Tom and myself that we should consider branching the
> tree immediately after beta1.

My take is that it should be branched as soon as a committer would find it useful to commit something destined for 9.2 instead of 9.1. If *any* committer feels it would be beneficial, that seems like prima facie evidence that it is needed, barring a convincing argument to the contrary.

> The other major issue discussed on the thread was how frequent and how
> long CommitFests should be. On balance, I think I prefer the current
> arrangement, though if we could make the CommitFests a bit shorter I
> would certainly like that better. I don't know how to make that happen
> without more reviewers, though.

Agreed. It is hard to picture doing shorter commit fests without just pushing more of the initial review burden onto the committers. Besides the normal herding-cats dynamic, there is the matter of schedules in an all-volunteer project. When I've managed CFs, there have been people who were on vacation, or under deadline to complete a major paper, during the first week of the CF who were able to contribute later. Some non-committer reviewers were able to complete review of one patch and move on to others. During the weeks of a single CF, some patches go through multiple critiques which send them back to the author, so I'm not sure how much a shorter cycle would help with that issue for non-committer reviews.

Perhaps we will get some of the stated benefits of shorter CF cycles as reviewers become more skilled and patches get to the reviewers with fewer problems. Maybe we could encourage reviewers to follow patches which they have moved to Ready for Committer status, to see what the committers find that they missed, to help develop better skills.
-Kevin
Re: [HACKERS] branching for 9.2devel
Robert Haas robertmh...@gmail.com writes:
> The recent and wide-ranging "formatting curmudgeons" thread included
> suggestions by Tom and myself that we should consider branching the
> tree immediately after beta1. This didn't get much commentary, but
> others have expressed support for similar ideas in the past, so
> perhaps we should do it? Comments?

One small issue that would have to be resolved before branching is whether and when to do a final pgindent run for 9.1. Seems like the alternatives would be:

1. Don't do anything more; be happy with the one run done already.
2. Do another run just before branching.
3. Make concurrent runs against HEAD and the 9.1 branch sometime later.

I don't much care for #3 because it would also affect whatever development work had been done to that point, and thus have a considerable likelihood of causing merge problems for WIP patches. Not sure if enough has happened to really require #2.

But a much more significant issue is that I don't see a lot of point in branching until we are actually ready to start active 9.2 development. So unless you see this as a vehicle whereby committers get to start hacking on 9.2 but nobody else does, there's no point in cutting a branch until shortly before a CommitFest opens. I'm not aware that we've set any dates for 9.2 CommitFests yet ...

> The other major issue discussed on the thread was how frequent and how
> long CommitFests should be. I don't think we really came to a
> consensus on that one.

Yeah, it did not seem like there was enough evidence to justify a change, and Greg's comments were discouraging. (Though you've run more fests than he has, so I was surprised that you weren't arguing similarly.) Should we consider scheduling one short-cycle fest during 9.2, just to see whether it works?
regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] stored procedures
On Mon, Apr 25, 2011 at 9:18 AM, Kevin Grittner kevin.gritt...@wicourts.gov wrote: Merlin Moncure mmonc...@gmail.com wrote: Procedures, unlike functions, can no longer rely on the catalogs remaining static, visibility-wise, through execution. If you start from the perspective that stored procedures are in many respects more like psql scripts than functions, this shouldn't be too surprising. If you have a psql script with multiple database transactions, you know that other processes can change things between transactions. Same deal with SPs. The whole raison d'être for SPs is that there are cases where people need something *different* from functions. While it would be *nice* to leverage plpgsql syntax for a stored procedure language, if it means we have to behave like a function, it's not worth it. As noted above it would be really nice if the SPI interface could be recovered for use in writing procedures. plpgsql the language is less of a sure thing, but it would be truly unfortunate if it couldn't be saved, on grounds of user retraining alone. If a sneaky injection of transaction manipulation gets the job done without rewriting the entire thing, then great, but it's an open question whether that's possible, and I'm about 2 orders of magnitude too unfamiliar with the code to say either way. I'm inclined to just poke around and see what breaks. OTOH, if you go the fully textual route you can get away with doing things that are not at all sensible in the plpgsql world (or at least not without a serious rethink of how it works), like connecting to databases mid-procedure, or a cleaner attack at things like running 'CLUSTER', than the flush-transaction-state methodology above. So I see we have three choices: 1. recover SPI, recover plpgsql (and other pls), transaction flush command (SPI_flush()?) 2. recover SPI, replace plpgsql (with what?) 3. 
no spi, custom built language, most flexibility, database reconnects, aka, 'tabula rasa' #1 is probably the easiest and most appealing on a lot of levels, but fraught with technical danger, and the most limiting? merlin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] intermittent FD regression check failure
Andrew Dunstan and...@dunslane.net writes: The sort of failure shown below has happened a few times recently. See recent failures on crake, mastodon and casteroides at http://www.pgbuildfarm.org/cgi-bin/show_failures.pl. It seems harmless enough. Do we need an alternative regression results file, or is there some way to prevent this? There's no real guarantee about the order in which dependency.c lists the dependencies (it's going to depend on the order in which the rows happen to be stored in pg_depend). There are some other regression tests that cope with this by temporarily doing \set VERBOSITY terse --- if the failures bug you, I'd suggest that, not trying to make an alternate expected file for every order observed in the field. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] branching for 9.2devel
On Mon, Apr 25, 2011 at 3:45 PM, Tom Lane t...@sss.pgh.pa.us wrote: One small issue that would have to be resolved before branching is whether and when to do a final pgindent run for 9.1. Seems like the alternatives would be: If the tools become easy to run, is it possible we could get to the point where we do an indent run on every commit? This would require a stable list of system symbols, plus the tool would need to add any new symbols added by the patch. As long as the tool produced consistent output I don't see that it would produce the spurious merge conflicts we've been afraid of in the past. Those would only occur if a patch went in without pgindent being run, someone developed a patch against that tree, and then pgindent was run before merging that patch. As long as it's run on every patch at commit it shouldn't cause those problems, since nobody could use non-pgindented code as their base. Personally I've never really liked the pgindent run. Whitespace always seemed like the least interesting of the code style issues, none of which seemed terribly important compared to more important things like staying warning-clean and defensive coding rules. But if we're going to do it, letting things diverge for a whole release and then running it once a year seems the worst of both worlds. -- greg -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] make check in contrib
On Mon, Apr 25, 2011 at 10:21 AM, Tom Lane t...@sss.pgh.pa.us wrote: Robert Haas robertmh...@gmail.com writes: On Sun, Apr 24, 2011 at 7:18 PM, Peter Eisentraut pete...@gmx.net wrote: I noticed again that make check in contrib doesn't work, so here is a patch to fix it. I think it might be more useful to have a check target that actually succeeds, even if it does nothing useful. That argument seems a bit irrelevant to the proposed patch. How so? -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] branching for 9.2devel
Greg Stark gsst...@mit.edu writes: If the tools become easy to run, is it possible we could get to the point where we do an indent run on every commit? This would require a stable list of system symbols, plus the tool would need to add any new symbols added by the patch. As long as the tool produced consistent output I don't see that it would produce the spurious merge conflicts we've been afraid of in the past. Those would only occur if a patch went in without pgindent being run, someone developed a patch against that tree, and then pgindent was run before merging that patch. As long as it's run on every patch at commit it shouldn't cause those problems, since nobody could use non-pgindented code as their base. No, not at all, because you're ignoring the common case of a series of dependent patches that are submitted in advance of the first one having been committed. To get to the point where we could do things that way, it would have to be the case that every developer could run pgindent locally and get the same results that the committer would get. Maybe we'll get there someday, and we should certainly try. But we're not nearly close enough to be considering changing policy on that basis. Personally I've never really liked the pgindent run. If everybody followed roughly the same coding/layout standards without prompting, we'd not need it. But they don't, so we do. I think pgindent gets a not-trivial share of the credit for the frequently-mentioned fact that the PG sources are pretty readable. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] make check in contrib
Robert Haas robertmh...@gmail.com writes: On Mon, Apr 25, 2011 at 10:21 AM, Tom Lane t...@sss.pgh.pa.us wrote: Robert Haas robertmh...@gmail.com writes: On Sun, Apr 24, 2011 at 7:18 PM, Peter Eisentraut pete...@gmx.net wrote: I noticed again that make check in contrib doesn't work, so here is a patch to fix it. I think it might be more useful to have a check target that actually succeeds, even if it does nothing useful. That argument seems a bit irrelevant to the proposed patch. How so? The proposed patch is to fix it, not remove it. Surely that's more useful than a no-op target. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] fsync reliability
On 04/24/2011 10:06 PM, Daniel Farina wrote: On Thu, Apr 21, 2011 at 8:51 PM, Greg Smith g...@2ndquadrant.com wrote: There's still the fsync'd a data block but not the directory entry yet issue as fall-out from this too. Why doesn't PostgreSQL run into this problem? Because the exact code sequence used is this one: open write fsync close And Linux shouldn't ever screw that up, or the similar rename path. Here's what the close man page says, from http://linux.die.net/man/2/close : Theodore Ts'o addresses this *exact* sequence of events, and suggests that if you want that rename to definitely stick, you must fsync the directory: http://www.linuxfoundation.org/news-media/blogs/browse/2009/03/don%E2%80%99t-fear-fsync Not exactly. That's talking about the sequence used for creating a file, plus a rename. When new WAL files are being created, I believe the ugly part of this is avoided. The path where WAL files are recycled using rename does seem to be the one with the most likely edge case. The difficult case Ts'o's discussion is trying to satisfy involves creating a new file and then swapping it for an old one atomically. PostgreSQL never does that exactly. It creates new files, pads them with zeros, and then starts writing to them; it also renames old files that are already of the correct length. Combined with the fact that there are always fsyncs after writes to the files, this case really isn't exactly the same as any of the others people are complaining about. -- Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
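The open/write/fsync/close sequence described above, extended with the directory fsync that Ts'o recommends for newly created files, can be sketched in Python. This is an illustrative translation, not PostgreSQL's actual C code; the function name is made up:

```python
import os

def durable_create(path: str, data: bytes) -> None:
    """Create a file and make both its contents and its directory
    entry durable across a crash."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        os.write(fd, data)
        os.fsync(fd)  # flush the file's data (and metadata) to stable storage
    finally:
        os.close(fd)
    # The new name lives in the parent directory, not in the file itself,
    # so fsync the directory to persist the directory entry as well.
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

Without the final directory fsync, a crash can leave the data blocks on disk but the file name missing, which is exactly the "fsync'd a data block but not the directory entry" hazard being discussed.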
Re: [HACKERS] branching for 9.2devel
On Mon, Apr 25, 2011 at 11:03 AM, Greg Stark gsst...@mit.edu wrote: If the tools become easy to run is it possible we cold get to the point where we do an indent run on every commit? This wold require a stable list of system symbols plus the tool would need to add any new symbols added by the patch. Methinks there'd need to be an experiment run where pgindent is run each time on some sort of parallel tree for a little while, to let people get some feel for what changes it introduces. Unfortunately, I'd fully expect there to be some interference between patches. Your patch changes the indentation of the code a little, breaking the patch I wanted to submit just a little later. And, by the way, I had already submitted my patch. So you broke my patch, even though mine was contributed first. That seems a little antisocial... -- When confronted by a difficult problem, solve it by reducing it to the question, How would the Lone Ranger handle this? -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] branching for 9.2devel
On Mon, Apr 25, 2011 at 10:45 AM, Tom Lane t...@sss.pgh.pa.us wrote: One small issue that would have to be resolved before branching is whether and when to do a final pgindent run for 9.1. Seems like the alternatives would be: 1. Don't do anything more, be happy with the one run done already. 2. Do another run just before branching. 3. Make concurrent runs against HEAD and 9.1 branch sometime later. I don't much care for #3 because it would also affect whatever developmental work had been done to that point, and thus have a considerable likelihood of causing merge problems for WIP patches. Not sure if enough has happened to really require #2. I'd vote for #1, unless by doing #2 we can fix the problems created by omission of some typedefs from the symbol tables emitted by newer gcc versions. But a much more significant issue is that I don't see a lot of point in branching until we are actually ready to start active 9.2 development. So unless you see this as a vehicle whereby committers get to start hacking 9.2 but nobody else does, there's no point in cutting a branch until shortly before a CommitFest opens. I'm not aware that we've set any dates for 9.2 CommitFests yet ... That doesn't strike me as a condition prerequisite for opening the tree. If anything, I'd say we ought to decide first when we'll be open for development (current question) and then schedule CommitFests around that. And I do think there is some value in having the tree open even if we haven't gotten the schedule quite hammered out yet, because even if we don't have any formal process in place to be working through the 9.2 queue, some people might choose to work on it anyway. The other major issue discussed on the thread was as to how frequent and how long CommitFests should be. I don't think we really came to a consensus on that one. Yeah, it did not seem like there was enough evidence to justify a change, and Greg's comments were discouraging. 
(Though you've run more fests than he has, so I was surprised that you weren't arguing similarly.) Should we consider scheduling one short-cycle fest during 9.2, just to see whether it works? Well, I basically think Greg is right, but the process is so darn much work that I don't want to be too quick to shut down ideas for improvement. If we do a one-week CommitFest, then there is time for ONE review. Either a reviewer will do it and no committer will look at it, or the other way around, but it will not get the level of attention that it does today. There is a huge amount of work involved in getting up to speed on a patch, and so it really makes a lot more sense to me to do it in a sustained push than in little dribs and drabs. I have to think my productivity would be halved by spending a week on it and then throwing in the towel. I'm inclined to suggest that we just go ahead and schedule five CommitFests, using the same schedule we have used for the last couple of releases, but with one more inserted at the front end: May 15, 2011 - June 14, 2011 July 15, 2011 - August 14, 2011 September 15, 2011 - October 14, 2011 November 15, 2011 - December 14, 2011 January 15, 2012 - February 14, 2012 I also think we should publicize as widely as possible that design proposals are welcome any time. Maybe that's not what we've said in the past, but I think it's the new normal, and we should make sure people know that. And I think we should reaffirm our previous commitment not to accept new, previously-unreviewed large patches in the last CommitFest. If anything we should strengthen it in some way. The crush of 100 patches in the last CF of the 9.1 cycle was entirely due to people waiting until the last minute, and a lot of that stuff was pretty half-baked, including a bunch of things that got committed after substantial further baking that should properly have been done much sooner. 
-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] make check in contrib
On Mon, Apr 25, 2011 at 11:22 AM, Tom Lane t...@sss.pgh.pa.us wrote: Robert Haas robertmh...@gmail.com writes: On Mon, Apr 25, 2011 at 10:21 AM, Tom Lane t...@sss.pgh.pa.us wrote: Robert Haas robertmh...@gmail.com writes: On Sun, Apr 24, 2011 at 7:18 PM, Peter Eisentraut pete...@gmx.net wrote: I noticed again that make check in contrib doesn't work, so here is a patch to fix it. I think it might be more useful to have a check target that actually succeeds, even if it does nothing useful. That argument seems a bit irrelevant to the proposed patch. How so? The proposed patch is to fix it, not remove it. Surely that's more useful than a no-op target. Oh, that's different... never mind. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] branching for 9.2devel
On Mon, Apr 25, 2011 at 11:32 AM, Christopher Browne cbbro...@gmail.com wrote: Methinks there'd need to be an experiment run where pgindent is run each time on some sort of parallel tree for a little while, to let people get some feel for what changes it introduces. The point is that if the tools worked everywhere, the same, then it should be run *before* the commit is finalized (git has a hundred+1 ways to get this to happen, be creative). So if you ever ran it on a $COMMIT from the published tree, it would never do anything. From the sounds of it, though, it's not quite ready for that. Unfortunately, I'd fully expect there to be some interference between patches. Your patch changes the indentation of the code a little, breaking the patch I wanted to submit just a little later. And, by the way, I had already submitted my patch. So you broke my patch, even though mine was contributed first. But if the only thing changed was the indentation level (because $PATCH2 wrapped a section of code your $PATCH1 changes completely in a new block, or removed a block level), git tools are pretty good at handling that. So, if everything is *always* pgindent clean, that means your new patch is too, and the only conflicting whitespace-only change would be a complete block-level indentation (easily handled). And you still have those block-level indentation changes even if not using pgindent. Of course, that all depends on: 1) pgindent working everywhere, exactly the same 2) Discipline of all new published commits being pgindent clean. a. -- Aidan Van Dyk Create like a god, ai...@highrise.ca command like a king, http://www.highrise.ca/ work like a slave. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] fsync reliability
On 04/23/2011 09:58 AM, Matthew Woodcraft wrote: As far as I can make out, the current situation is that this fix (the auto_da_alloc mount option) doesn't work as advertised, and the ext4 maintainers are not treating this as a bug. See https://bugzilla.kernel.org/show_bug.cgi?id=15910 I agree with the resolution that this isn't a bug. As pointed out there, XFS does the same thing, and this behavior isn't going away any time soon. Leaving behind zero-length files in situations where developers tried to optimize away a necessary fsync happens. Here's the part where the submitter goes wrong: We first added a fsync() call for each extracted file. But scattered fsyncs resulted in a massive performance degradation during package installation (factor 10 or more, some reported that it took over an hour to unpack a linux-headers-* package!) In order to reduce the I/O performance degradation, fsync calls were deferred... Stop right there; the slow path was the only one that had any hope of being correct. It can actually slow things by a factor of 100X or more, worst-case. So, we currently have the choice between filesystem corruption or major performance loss: yes, you do. Writing files is tricky, and it can be either slow or safe. If you're going to avoid even trying to enforce the right thing here, you're going to get really burned. It's unfortunate that so many people are used to the speed that the common ext3-plus-cheap-hard-drive setup has delivered for a while now: all writes are cached unsafely, but the filesystem resists a few bad behaviors. Much of the struggle where people say this is so much slower, I won't put up with it and try to code around it is futile, and it's hard to separate out the attempts to find such optimizations from the legitimate complaints. Anyway, you're right to point out that the filesystem is not necessarily going to save anyone from some of the tricky rename situations even with the improvements made to delayed allocation. 
They've fixed some of the worst behavior of the earlier implementation, but there are still potential issues in that area it seems. -- Greg Smith 2ndQuadrant USg...@2ndquadrant.com Baltimore, MD PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
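The rename case under discussion, replacing a file's contents so that a crash leaves either the old or the new version and never a truncated or zero-length file, is conventionally handled with the write-temp/fsync/rename pattern. A generic Python sketch follows (my own illustration, not code from PostgreSQL or from the dpkg case in the bug report):

```python
import os

def atomic_replace(path: str, data: bytes) -> None:
    """Replace path's contents so that a crash exposes either the old
    or the new data, never an empty or partially written file."""
    tmp = path + ".tmp"  # same directory, so the rename stays on one filesystem
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        os.write(fd, data)
        os.fsync(fd)  # the data must reach disk *before* the rename is durable
    finally:
        os.close(fd)
    os.rename(tmp, path)  # atomically swap the directory entry
    # Persist the rename itself by fsyncing the parent directory.
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

Skipping the pre-rename fsync is precisely the optimization that produces the zero-length files complained about on ext4 and XFS with delayed allocation.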
Re: [HACKERS] Extension Packaging
On Apr 25, 2011, at 5:49 AM, Robert Haas wrote: I think it's a bit awkward that we have to do it this way, though. The installed version of the extension at the SQL level won't match what the user thinks they've installed. Granted, it'll be in the ballpark (1.0 vs 1.0.3, for example) but that's not quite the same thing. I also note that we've moved PDQ from thinking that versions are opaque strings to having pretty specific ideas about how they are going to have to be assigned and managed to avoid maintainer insanity. That suggests to me that at a minimum we need some more documentation here. These are really great points. I knew I wasn't thrilled about this suggestion, but wasn't sure why. Frankly, I think it will be really confusing to users who think they have FooBar 1.2.2 installed but see only 1.2 in the database. I don't think I would do that, personally. I'm much more inclined to have the same extension version everywhere I can. If the core wants to build some infrastructure around the meaning of versions, then it will make sense (especially if there's a way to see *both* versions). But if not, I frankly don't see the point. Best, David -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] branching for 9.2devel
Robert Haas robertmh...@gmail.com writes: On Mon, Apr 25, 2011 at 10:45 AM, Tom Lane t...@sss.pgh.pa.us wrote: But a much more significant issue is that I don't see a lot of point in branching until we are actually ready to start active 9.2 development. So unless you see this as a vehicle whereby committers get to start hacking 9.2 but nobody else does, there's no point in cutting a branch until shortly before a CommitFest opens. I'm not aware that we've set any dates for 9.2 CommitFests yet ... That doesn't strike me as a condition prerequisite for opening the tree. If anything, I'd say we ought to decide first when we'll be open for development (current question) and then schedule CommitFests around that. And I do think there is some value in having the tree open even if we haven't gotten the schedule quite hammered out yet, because even if we don't have any formal process in place to be working through the 9.2 queue, some people might choose to work on it anyway. You're ignoring the extremely real costs involved in an early branch, namely having to double-patch every bug fix we make during beta. (And no, my experiences with git cherry-pick are not so pleasant as to make me feel that that's a non-problem.) I really don't think that we should branch until we're willing to start doing 9.2 development in earnest. You're essentially saying that we should encourage committers to do some cowboy committing of whatever 9.2 stuff seems ready, and never mind the distributed costs that imposes on the rest of the project. I don't buy that. IOW, the decision process ought to be set 9.2 schedule - set CF dates - set branch date. You're attacking it from the wrong end. 
I'm inclined to suggest that we just go ahead and schedule five CommitFests, using the same schedule we have used for the last couple of releases, but with one more inserted at the front end: May 15, 2011 - June 14, 2011 July 15, 2011 - August 14, 2011 September 15, 2011 - October 14, 2011 November 15, 2011 - December 14, 2011 January 15, 2012 - February 14, 2012 Well, if you go with that, then I will personally refuse to have anything to do with the first CF, because I was intending to spend my non-bug-fix time during beta on reading the already committed but probably still pretty buggy stuff from 9.1 (SSI and SR in particular). I think a schedule like the above will guarantee that no beta testing gets done by the development community at all, which will be great for moving 9.2 along and terrible for the release quality of 9.1. I think the earliest we could start a CF without blowing off the beta process entirely is June. Maybe we could start CFs June 1, August 1, etc? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Patch for pg_upgrade to turn off autovacuum
Bruce Momjian wrote: Well, having seen no replies, I am going to apply the version of the patch in a few days that keeps the old vacuum-disable behavior for older releases, and uses the -b flag for newer ones by testing the catalog version, e.g.: snprintf(cmd, sizeof(cmd), SYSTEMQUOTE "\"%s/pg_ctl\" -l \"%s\" -D \"%s\" -o \"-p %d %s\" start >> \"%s\" 2>&1" SYSTEMQUOTE, bindir, output_filename, datadir, port, (cluster->controldata.cat_ver >= BINARY_UPGRADE_SERVER_FLAG_CAT_VER) ? "-b" : "-c autovacuum=off -c autovacuum_freeze_max_age=2000000000", log_opts.filename); I know people like that pg_upgrade doesn't care much about what version it is running on, but it is really the ability of pg_upgrade to ignore changes made to the server that is why pg_upgrade is useful, and this change makes pg_upgrade even more immune to such changes. Applied. -- Bruce Momjian br...@momjian.us http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] branching for 9.2devel
Aidan Van Dyk ai...@highrise.ca wrote: 2) Discipline of all new published commits being pgindent clean. Heck, I think it would be reasonable to require that patch submitters run it before creating their patches. If people merged in changes from the main repository and then ran pgindent, I don't think there would be much in the way of merge problems from it. Personally, once I had pgindent set up I didn't find running it any more onerous than running filterdiff to get things into context diff format. (That is, both seemed pretty trivial.) The problem is that getting it set up isn't yet trivial. This is all assuming that we fix that. -Kevin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Extension Packaging
On Mon, Apr 25, 2011 at 12:00 PM, David E. Wheeler da...@kineticode.com wrote: These are really great points. I knew I wasn't thrilled about this suggestion, but wasn't sure why. Frankly, I think it will be really confusing to users who think they have FooBar 1.2.2 installed but see only 1.2 in the database. I don't think I would do that, personally. I'm much more inclined to have the same extension version everywhere I can. Really, that means you just add a SQL function to your extension, something similar to uname -a or rpm -qi, which includes something that is *forced* to change the postgresql catalog view of your extension every time you ship a new version (major, or patch), and then you get the exact version (and whatever else you include) for free every time you update ;-) The thing to remember is that the postgresql extensions are managing the *postgresql catalogs'* view of things, even though the shared object used by postgresql to provide the particular catalog's requirements can be fixed. If your extension is almost exclusively a shared object, and the only catalog things are a couple of functions defined to point into the C code, there really isn't anything catalog-wise that you need to manage for upgrades. -- Aidan Van Dyk Create like a god, ai...@highrise.ca command like a king, http://www.highrise.ca/ work like a slave. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] offline consistency check and info on attributes
Excerpts from Tomas Vondra's message of Sun Apr 24 13:49:31 -0300 2011: Right now I do have a very simple tool that reads a given file and performs a lot of checks at the block level (as described in bufpage.h), and the next step should be validating the basic structure of the tuples (lengths). And that's the point where I'm stuck right now - I'm thinking about what might be the most elegant way to get info about attributes, without access to the pg_attribute catalog (the tool is intended for offline checks). Each tuple declares its length. You don't need to know each attribute's length to check that. Doing attribute-level checks is probably pointless without catalog access. I've figured out the catalog-to-file mapping (in relmapper.c), but now I'm wondering - it's just another relation, so I'd have to read the block, parse the items and interpret them (not sure how to do that without the pg_attribute data itself). So I wonder - what would be an elegant solution? This reminds me -- we need to have pg_filedump be able to dump the relmapper stuff. I was going to write a patch for it but then I forgot. -- Álvaro Herrera alvhe...@commandprompt.com The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
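For the block-level checks Tomas describes, the fixed-size page header can be validated with no catalog access at all. A rough Python sketch, assuming an 8 kB little-endian page laid out per the PageHeaderData struct in bufpage.h of that era; the function name and the checks (in the spirit of PageHeaderIsValid) are my own choices:

```python
import struct

# pd_lsn (2 x uint32), pd_tli, pd_flags, pd_lower, pd_upper,
# pd_special, pd_pagesize_version, pd_prune_xid
PAGE_HEADER_FMT = "<IIHHHHHHI"
PAGE_HEADER_SIZE = struct.calcsize(PAGE_HEADER_FMT)  # 24 bytes

def page_header_ok(page: bytes, blcksz: int = 8192) -> bool:
    """Sanity-check one heap/index page without consulting the catalogs."""
    if len(page) != blcksz:
        return False
    if page == bytes(blcksz):
        return True  # an all-zero page counts as valid (new, never written)
    (_, _, _, _, lower, upper, special, pagesize_version, _) = \
        struct.unpack_from(PAGE_HEADER_FMT, page)
    size = pagesize_version & 0xFF00  # size and version share one uint16
    # Free-space pointers must be ordered and must stay inside the page.
    return (size == blcksz
            and PAGE_HEADER_SIZE <= lower <= upper <= special <= blcksz)
```

Tuple-level length checks can then walk the line pointer array between the header and pd_lower, still without pg_attribute; only attribute-by-attribute decoding needs the catalogs, as noted above.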
Re: [HACKERS] Extension Packaging
On Apr 25, 2011, at 9:14 AM, Aidan Van Dyk wrote: Really, that means you just a sql function to your extension, somethign similary to uname -a, or rpm -qi, which includes something that is *forced* to change the postgresql catalog view of your extension every time you ship a new version (major, or patch), and then you get the exact version (and whatever else you include) for free every time you update ;-) I think it's silly for every extension to have its own function that does this. Every one would have a different name and, perhaps, signature. The thing to remember is that the postgresql extensions are managing the *postgresql catalogs* view of things, even though the shared object used by postgresql to provide the particular catalog's requirements can be fixed. If your extension is almost exclusively a shared object, and the only catalog things are a couple of functions defined to point into the C code, there really isn't anything catalog-wise that you need to manage for upgrades. Most of my extensions will not be written in C (e.g., pgTAP, explanation). Best, David -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Unfriendly handling of pg_hba SSL options with SSL off
A recent complaint in pgsql-novice revealed that if you have say hostssl all all 127.0.0.1/32 md5 clientcert=1 in pg_hba.conf, but you forget to enable SSL in postgresql.conf, you get something like this: LOG: client certificates can only be checked if a root certificate store is available HINT: Make sure the root.crt file is present and readable. CONTEXT: line 82 of configuration file /home/tgl/version90/data/pg_hba.conf LOG: client certificates can only be checked if a root certificate store is available HINT: Make sure the root.crt file is present and readable. CONTEXT: line 84 of configuration file /home/tgl/version90/data/pg_hba.conf FATAL: could not load pg_hba.conf Needless to say, this is pretty unhelpful, especially if you actually do have a root.crt file. I'm inclined to think that the correct fix is to make parse_hba_line, where it first realizes the line is hostssl, check not only that SSL support is compiled but that it's turned on. Is it really sensible to allow hostssl lines in pg_hba.conf when SSL is turned off? At best they are no-ops, and at worst they're going to result in weird failures like this one. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] branching for 9.2devel
On Mon, Apr 25, 2011 at 12:03 PM, Tom Lane t...@sss.pgh.pa.us wrote: You're ignoring the extremely real costs involved in an early branch, namely having to double-patch every bug fix we make during beta. (And no, my experiences with git cherry-pick are not so pleasant as to make me feel that that's a non-problem.) I really don't think that we should branch until we're willing to start doing 9.2 development in earnest. You're essentially saying that we should encourage committers to do some cowboy committing of whatever 9.2 stuff seems ready, and never mind the distributed costs that imposes on the rest of the project. I don't buy that. IOW, the decision process ought to be set 9.2 schedule - set CF dates - set branch date. You're attacking it from the wrong end. I'm inclined to suggest that we just go ahead and schedule five CommitFests, using the same schedule we have used for the last couple of releases, but with one more inserted at the front end: May 15, 2011 - June 14, 2011 July 15, 2011 - August 14, 2011 September 15, 2011 - October 14, 2011 November 15, 2011 - December 14, 2011 January 15, 2012 - February 14, 2012 Well, if you go with that, then I will personally refuse to have anything to do with the first CF, because I was intending to spend my non-bug-fix time during beta on reading the already committed but probably still pretty buggy stuff from 9.1 (SSI and SR in particular). I think a schedule like the above will guarantee that no beta testing gets done by the development community at all, which will be great for moving 9.2 along and terrible for the release quality of 9.1. I think the earliest we could start a CF without blowing off the beta process entirely is June. Maybe we could start CFs June 1, August 1, etc? I can't object to taking another two weeks, especially since that would give people who may have been expecting a later branch more time to get their stuff into shape for CF1. 
One problem with that is that it would make the fourth CommitFest start on December 1st, which will tend to make that CommitFest pretty half-baked, due to the large number of PostgreSQL developers who observe Christmas. That seems particularly bad if we're planning to end the cycle at that point. Perhaps that would be a good time to employ Peter's idea for a short, one week CommitFest: CF #1: June 1-30 CF #2: August 1-31 CF #3: October 1-31 CF #4 (one week shortened CF): December 1-7 CF #5: January 1-31 That would give people another crack at getting feedback before the final push, right at the time of the release cycle when timely feedback becomes most important. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Extension Packaging
On Mon, Apr 25, 2011 at 12:17 PM, David E. Wheeler da...@kineticode.com wrote: On Apr 25, 2011, at 9:14 AM, Aidan Van Dyk wrote: Really, that means you just add a sql function to your extension, something similar to uname -a, or rpm -qi, which includes something that is *forced* to change the postgresql catalog view of your extension every time you ship a new version (major, or patch), and then you get the exact version (and whatever else you include) for free every time you update ;-) I think it's silly for every extension to have its own function that does this. Every one would have a different name and, perhaps, signature. +1. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Unfriendly handling of pg_hba SSL options with SSL off
On Mon, Apr 25, 2011 at 12:52 PM, Tom Lane t...@sss.pgh.pa.us wrote: A recent complaint in pgsql-novice revealed that if you have say hostssl all all 127.0.0.1/32 md5 clientcert=1 in pg_hba.conf, but you forget to enable SSL in postgresql.conf, you get something like this: LOG: client certificates can only be checked if a root certificate store is available HINT: Make sure the root.crt file is present and readable. CONTEXT: line 82 of configuration file /home/tgl/version90/data/pg_hba.conf LOG: client certificates can only be checked if a root certificate store is available HINT: Make sure the root.crt file is present and readable. CONTEXT: line 84 of configuration file /home/tgl/version90/data/pg_hba.conf FATAL: could not load pg_hba.conf Needless to say, this is pretty unhelpful, especially if you actually do have a root.crt file. I'm inclined to think that the correct fix is to make parse_hba_line, where it first realizes the line is hostssl, check not only that SSL support is compiled but that it's turned on. Is it really sensible to allow hostssl lines in pg_hba.conf when SSL is turned off? At best they are no-ops, and at worst they're going to result in weird failures like this one. It's not clear to me what behavior you are proposing. Would we disregard the hostssl line or treat it as an error? -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] branching for 9.2devel
Kevin Grittner kevin.gritt...@wicourts.gov writes: Aidan Van Dyk ai...@highrise.ca wrote: Of course, that all depends on: 1) pgindent working everywhere, exactly the same 2) Discipline of all new published commits being pgindent clean. The problem is that getting it set up isn't yet trivial. This is all assuming that we fix that. Yeah, there is not much point in thinking about #2 until we have #1. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] make check in contrib
On Mon, 2011-04-25 at 11:22 -0400, Tom Lane wrote: Robert Haas robertmh...@gmail.com writes: On Mon, Apr 25, 2011 at 10:21 AM, Tom Lane t...@sss.pgh.pa.us wrote: Robert Haas robertmh...@gmail.com writes: On Sun, Apr 24, 2011 at 7:18 PM, Peter Eisentraut pete...@gmx.net wrote: I noticed again that make check in contrib doesn't work, so here is a patch to fix it. I think it might be more useful to have a check target that actually succeeds, even if it does nothing useful. That argument seems a bit irrelevant to the proposed patch. How so? The proposed patch is to fix it, not remove it. Surely that's more useful than a no-op target. The proposed patch will support make check for contrib modules, but not for external users of pgxs. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Unfriendly handling of pg_hba SSL options with SSL off
On Mon, Apr 25, 2011 at 18:59, Robert Haas robertmh...@gmail.com wrote: On Mon, Apr 25, 2011 at 12:52 PM, Tom Lane t...@sss.pgh.pa.us wrote: A recent complaint in pgsql-novice revealed that if you have say hostssl all all 127.0.0.1/32 md5 clientcert=1 in pg_hba.conf, but you forget to enable SSL in postgresql.conf, you get something like this: LOG: client certificates can only be checked if a root certificate store is available HINT: Make sure the root.crt file is present and readable. CONTEXT: line 82 of configuration file /home/tgl/version90/data/pg_hba.conf LOG: client certificates can only be checked if a root certificate store is available HINT: Make sure the root.crt file is present and readable. CONTEXT: line 84 of configuration file /home/tgl/version90/data/pg_hba.conf FATAL: could not load pg_hba.conf Needless to say, this is pretty unhelpful, especially if you actually do have a root.crt file. I'm inclined to think that the correct fix is to make parse_hba_line, where it first realizes the line is hostssl, check not only that SSL support is compiled but that it's turned on. Is it really sensible to allow hostssl lines in pg_hba.conf when SSL is turned off? At best they are no-ops, and at worst they're going to result in weird failures like this one. It's not clear to me what behavior you are proposing. Would we disregard the hostssl line or treat it as an error? It would absolutely have to be treat it as an error. another option would be to throw a more specific warning at that place, and keep the rest of the code the same. We can't *ignore* hostssl rows in ssl=off mode, that would be an easy way for an admin to set up a system they thought was secure but isn't... -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Unfriendly handling of pg_hba SSL options with SSL off
Robert Haas robertmh...@gmail.com writes: On Mon, Apr 25, 2011 at 12:52 PM, Tom Lane t...@sss.pgh.pa.us wrote: I'm inclined to think that the correct fix is to make parse_hba_line, where it first realizes the line is hostssl, check not only that SSL support is compiled but that it's turned on. It's not clear to me what behavior you are proposing. Would we disregard the hostssl line or treat it as an error? Sorry, I wasn't clear. I meant to throw an error. We already do throw an error if you put hostssl in pg_hba.conf when SSL support wasn't compiled at all. Why shouldn't we throw an error if it's compiled but not turned on? Or we could go in the direction of making hostssl lines be a silent no-op in both cases, but that doesn't seem like especially user-friendly design to me. We don't treat any other cases in pg_hba.conf comparably AFAIR. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] make check in contrib
Peter Eisentraut pete...@gmx.net writes: On mån, 2011-04-25 at 11:22 -0400, Tom Lane wrote: The proposed patch is to fix it, not remove it. Surely that's more useful than a no-op target. The proposed patch will support make check for contrib modules, but not for external users of pgxs. So what will happen if an external user tries it? What happens now? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] branching for 9.2devel
On Mon, Apr 25, 2011 at 1:04 PM, Tom Lane t...@sss.pgh.pa.us wrote: Kevin Grittner kevin.gritt...@wicourts.gov writes: Aidan Van Dyk ai...@highrise.ca wrote: Of course, that all depends on: 1) pgindent working everywhere, exactly the same 2) Discipline of all new published commits being pgindent clean. The problem is that getting it set up isn't yet trivial. This is all assuming that we fix that. Yeah, there is not much point in thinking about #2 until we have #1. Would this be a good GSoC project (or has the deadline passed)? -- Thanks, David Blewett -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Unfriendly handling of pg_hba SSL options with SSL off
On Mon, Apr 25, 2011 at 19:11, Tom Lane t...@sss.pgh.pa.us wrote: Robert Haas robertmh...@gmail.com writes: On Mon, Apr 25, 2011 at 12:52 PM, Tom Lane t...@sss.pgh.pa.us wrote: I'm inclined to think that the correct fix is to make parse_hba_line, where it first realizes the line is hostssl, check not only that SSL support is compiled but that it's turned on. It's not clear to me what behavior you are proposing. Would we disregard the hostssl line or treat it as an error? Sorry, I wasn't clear. I meant to throw an error. We already do throw an error if you put hostssl in pg_hba.conf when SSL support wasn't compiled at all. Why shouldn't we throw an error if it's compiled but not turned on? Or we could go in the direction of making hostssl lines be a silent no-op in both cases, but that doesn't seem like especially user-friendly design to me. We don't treat any other cases in pg_hba.conf comparably AFAIR. We need to be very careful about ignoring *anything* in pg_hba.conf, since it's security configuration. Doing it silently is even worse.. -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] branching for 9.2devel
On 04/25/2011 01:12 PM, David Blewett wrote: On Mon, Apr 25, 2011 at 1:04 PM, Tom Lane t...@sss.pgh.pa.us wrote: Kevin Grittner kevin.gritt...@wicourts.gov writes: Aidan Van Dyk ai...@highrise.ca wrote: Of course, that all depends on: 1) pgindent working everywhere, exactly the same 2) Discipline of all new published commits being pgindent clean. The problem is that getting it set up isn't yet trivial. This is all assuming that we fix that. Yeah, there is not much point in thinking about #2 until we have #1. Would this be a good GSoC project (or has the deadline passed)? Greg Smith and I have done some work on it, and we're going to discuss it at pgCon. I don't think there's terribly far to go. cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Unfriendly handling of pg_hba SSL options with SSL off
Magnus Hagander mag...@hagander.net writes: On Mon, Apr 25, 2011 at 18:59, Robert Haas robertmh...@gmail.com wrote: It's not clear to me what behavior you are proposing. Would we disregard the hostssl line or treat it as an error? It would absolutely have to be treat it as an error. another option would be to throw a more specific warning at that place, and keep the rest of the code the same. We can't *ignore* hostssl rows in ssl=off mode, that would be an easy way for an admin to set up a system they thought was secure but isn't... No, I don't see that it's a security hole. What would happen if the line is ignored is you couldn't make connections with it. I think you are positing that it'd be a potential security problem if a connection attempt fell through that line and then succeeded with some later line that had less-desirable properties --- but if your pg_hba.conf contents are like that, you already have issues, because a non-SSL-enabled client is going to reach that later line anyway. Nonetheless, it's extremely confusing to the admin to ignore such a line, and that's not a good thing in any security-sensitive context. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Unfriendly handling of pg_hba SSL options with SSL off
On Mon, Apr 25, 2011 at 1:11 PM, Tom Lane t...@sss.pgh.pa.us wrote: Robert Haas robertmh...@gmail.com writes: On Mon, Apr 25, 2011 at 12:52 PM, Tom Lane t...@sss.pgh.pa.us wrote: I'm inclined to think that the correct fix is to make parse_hba_line, where it first realizes the line is hostssl, check not only that SSL support is compiled but that it's turned on. It's not clear to me what behavior you are proposing. Would we disregard the hostssl line or treat it as an error? Sorry, I wasn't clear. I meant to throw an error. We already do throw an error if you put hostssl in pg_hba.conf when SSL support wasn't compiled at all. Why shouldn't we throw an error if it's compiled but not turned on? OK, I think you're right. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Unfriendly handling of pg_hba SSL options with SSL off
On Mon, Apr 25, 2011 at 19:18, Tom Lane t...@sss.pgh.pa.us wrote: Magnus Hagander mag...@hagander.net writes: On Mon, Apr 25, 2011 at 18:59, Robert Haas robertmh...@gmail.com wrote: It's not clear to me what behavior you are proposing. Would we disregard the hostssl line or treat it as an error? It would absolutely have to be treat it as an error. another option would be to throw a more specific warning at that place, and keep the rest of the code the same. We can't *ignore* hostssl rows in ssl=off mode, that would be an easy way for an admin to set up a system they thought was secure but isn't... No, I don't see that it's a security hole. What would happen if the line is ignored is you couldn't make connections with it. I think you are positing that it'd be a potential security problem if a connection attempt fell through that line and then succeeded with some later line that had less-desirable properties --- but if your pg_hba.conf contents are like that, you already have issues, because a non-SSL-enabled client is going to reach that later line anyway. Good point. Nonetheless, it's extremely confusing to the admin to ignore such a line, and that's not a good thing in any security-sensitive context. Yeah, better make any misconfiguration very clear - let's throw an error. -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Foreign table permissions and cloning
On Wed, Apr 20, 2011 at 11:08 AM, Robert Haas robertmh...@gmail.com wrote: On Wed, Apr 20, 2011 at 9:59 AM, Tom Lane t...@sss.pgh.pa.us wrote: Shigeru Hanada han...@metrosystems.co.jp writes: Attached patch implements the specifications below. It also includes documents and regression tests. Some of the regression tests might be redundant and removable. 1) GRANT privilege [(column_list)] ON [TABLE] TO role also works for foreign tables as well as regular tables, if the specified privilege was SELECT. This might seem a little inconsistent, but it feels natural to use this syntax for SELECT-able objects. Anyway, such usage can be disabled with a trivial fix. It seems really seriously inconsistent to do that at the same time that you make other forms of GRANT treat foreign tables as a separate class of object. I think if they're going to be a separate class of object, they should be separate, full stop. Making them just mostly separate will confuse people no end. I agree. Hmm, it appears we had some pre-existing inconsistency here, because ALL TABLES IN schema currently includes views. That's weird, but it'll be even more weird if we adopt the approach suggested by this patch, which creates ALL FOREIGN TABLES IN schema but allows ALL TABLES IN schema to go on including views. Maybe there is an argument for having ALL {TABLES|VIEWS|FOREIGN TABLES} IN schema - or maybe there isn't - but having two out of the three of them doesn't do anything for me. For now I think we should go with the path of least resistance and just document that ALL TABLES IN schema now includes not only views but also foreign tables. Putting that together with the comments already made upthread, the only behavior changes I think we should make here are: - Add GRANT privilege [(column_list)] ON FOREIGN TABLE table TO role. - Require that the argument to GRANT privilege [(column_list)] ON TABLE TO role be an ordinary table, not a foreign table. 
That looks like enough to make foreign table handling consistent with what we're already doing. Barring objections, I'll go make that happen. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Unfriendly handling of pg_hba SSL options with SSL off
Magnus Hagander mag...@hagander.net writes: Yeah, better make any misconfiguration very clear - let's throw an error. OK, so we need something like (untested):

    if (token[4] == 's')        /* "hostssl" */
    {
    #ifdef USE_SSL
    +       if (!EnableSSL)
    +       {
    +           ereport(LOG,
    +                   (errcode(ERRCODE_CONFIG_FILE_ERROR),
    +                    errmsg("hostssl requires SSL to be turned on"),
    +                    errhint("Set ssl = on in postgresql.conf."),
    +                    errcontext("line %d of configuration file \"%s\"",
    +                               line_num, HbaFileName)));
    +           return false;
    +       }
            parsedline->conntype = ctHostSSL;
    #else
            ereport(LOG,
                    (errcode(ERRCODE_CONFIG_FILE_ERROR),
                     errmsg("hostssl not supported on this platform"),
                     errhint("Compile with --with-openssl to use SSL connections."),
                     errcontext("line %d of configuration file \"%s\"",
                                line_num, HbaFileName)));
            return false;
    #endif
    }

While I'm looking at this, I notice that here (and in some other places in pg_hba.conf) we say "not supported on this platform", which seems rather bogus to me. It implies that it's not possible to have SSL support on the user's machine, which is most likely not the case. I'd be happier with "not supported by this build of PostgreSQL" or some such wording. Thoughts? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] make check in contrib
On Mon, 2011-04-25 at 13:12 -0400, Tom Lane wrote: Peter Eisentraut pete...@gmx.net writes: On Mon, 2011-04-25 at 11:22 -0400, Tom Lane wrote: The proposed patch is to fix it, not remove it. Surely that's more useful than a no-op target. The proposed patch will support make check for contrib modules, but not for external users of pgxs. So what will happen if an external user tries it? What happens now? Now:

    $ make check
    'make check' is not supported.
    Do 'make install', then 'make installcheck' instead.
    make: *** [check] Error 1

If we removed that, then it would be:

    make: Nothing to be done for `check'.
    [exit 0]

Hmm, I'm slightly surprised by the latter behavior, but it's the case that since check is a global phony target, if you don't provide commands for it, it will just do nothing and succeed. Since some people didn't like removing the hint about installcheck, I'd suggest just removing the exit 1, which should then be pretty consistent overall. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
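The "do nothing and succeed" behavior Peter describes can be reproduced with a throwaway Makefile; this is a hypothetical sketch (it assumes GNU make is installed, and the temp directory is made up for illustration):

```shell
# Create a throwaway Makefile whose "check" target has no recipe.
workdir=$(mktemp -d)
printf 'check:\n' > "$workdir/Makefile"

# make resolves the target, finds nothing to run, and exits 0,
# typically printing "Nothing to be done for 'check'".
make -C "$workdir" check
echo "exit status: $?"
```

Removing the `exit 1` from the real rule, as suggested above, gives the same successful exit while still printing the installcheck hint.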
Re: [HACKERS] Foreign table permissions and cloning
Robert Haas robertmh...@gmail.com writes: Hmm, it appears we had some pre-existing inconsistency here, because ALL TABLES IN schema currently includes views. That's weird, but it'll be even more weird if we adopt the approach suggested by this patch, which creates ALL FOREIGN TABLES IN schema but allows ALL TABLES IN schema to go on including views. Maybe there is an argument for having ALL {TABLES|VIEWS|FOREIGN TABLES} IN schema - or maybe there isn't - but having two out of the three of them doesn't do anything for me. Yeah, that's a fair point. Another issue is that eventually foreign tables will probably have some update capability, so designing GRANT on the assumption that only SELECT should be allowed is a mistake. In fact, I'd argue that GRANT ought not be enforcing such an assumption even today, especially if it leads to asymmetry there. Let somebody GRANT UPDATE if they want to --- there's no need to throw an error until the update operation is actually tried. Putting that together with the comments already made upthread, the only behavior changes I think we should make here are: - Add GRANT privilege [(column_list)] ON FOREIGN TABLE table TO role. - Require that the argument to GRANT privilege [(column_list)] ON TABLE TO role be an ordinary table, not a foreign table. I think this might be going in the wrong direction given the above thoughts. At the very least you're going to have to make sure the prohibition is easily reversible. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] offline consistency check and info on attributes
On 25.4.2011 18:16, Alvaro Herrera wrote: Excerpts from Tomas Vondra's message of Sun Apr 24 13:49:31 -0300 2011: Right now I do have a very simple tool that reads a given file and performs a lot of checks at the block level (as described in bufpage.h), and the next step should be validating the basic structure of the tuples (lengths). And that's the point where I'm stuck right now - I'm thinking what might be the most elegant way to get info about attributes, without access to the pg_attribute catalog (the tool is intended for offline checks). Each tuple declares its length. You don't need to know each attribute's length to check that. Doing attribute-level checks is probably pointless without catalog access. Yes, I know the tuple length is in HeapTupleHeader (and I'm already checking that), but that does not allow checking the lengths of the individual columns, especially those with varlena types. That's a very annoying type of corruption, because queries that do not touch such columns seem to work fine, but once you attempt to access the corrupted column you'll get something like this:

    pg_dump: SQL command failed
    pg_dump: Error message from server: ERROR: invalid memory alloc request size 4294967293

So the ability to check where the column lengths do not make sense (in this case it's a negative value) would be a nice thing. But without access to pg_attribute this seems to be very difficult. Hmmm, maybe the idea to build it as an offline tool (to use it when the DB is not running) is not a good idea ... Tomas -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
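As a side note, the alloc size in that pg_dump error is what you get when a small negative 32-bit length (here -3) is reinterpreted as an unsigned allocation request; a quick illustration of just the arithmetic in plain Python (not PostgreSQL code):

```python
import struct

# Pack a corrupted, negative 32-bit length and re-read it as unsigned,
# the way an oversized allocation request would see it: -3 wraps to 2**32 - 3.
corrupted_len = -3
as_unsigned, = struct.unpack("<I", struct.pack("<i", corrupted_len))
print(as_unsigned)  # 4294967293, the size in the error message above
```

This is why a negative length field in a damaged varlena header shows up as an absurdly large allocation request rather than an obviously negative number.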
Re: [HACKERS] make check in contrib
Peter Eisentraut pete...@gmx.net writes: Since some people didn't like removing the hint about installcheck, I'd suggest just removing the exit 1, which should then be pretty consistent overall. Works for me. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Unfriendly handling of pg_hba SSL options with SSL off
On Mon, Apr 25, 2011 at 19:38, Tom Lane t...@sss.pgh.pa.us wrote: Magnus Hagander mag...@hagander.net writes: Yeah, better make any misconfiguration very clear - let's throw an error. OK, so we need something like (untested):

    if (token[4] == 's')        /* "hostssl" */
    {
    #ifdef USE_SSL
    +       if (!EnableSSL)
    +       {
    +           ereport(LOG,
    +                   (errcode(ERRCODE_CONFIG_FILE_ERROR),
    +                    errmsg("hostssl requires SSL to be turned on"),
    +                    errhint("Set ssl = on in postgresql.conf."),
    +                    errcontext("line %d of configuration file \"%s\"",
    +                               line_num, HbaFileName)));
    +           return false;
    +       }
            parsedline->conntype = ctHostSSL;
    #else
            ereport(LOG,
                    (errcode(ERRCODE_CONFIG_FILE_ERROR),
                     errmsg("hostssl not supported on this platform"),
                     errhint("Compile with --with-openssl to use SSL connections."),
                     errcontext("line %d of configuration file \"%s\"",
                                line_num, HbaFileName)));
            return false;
    #endif
    }

Looks good to me. While I'm looking at this, I notice that here (and in some other places in pg_hba.conf) we say "not supported on this platform", which seems rather bogus to me. It implies that it's not possible to have SSL support on the user's machine, which is most likely not the case. I'd be happier with "not supported by this build of PostgreSQL" or some such wording. Thoughts? There seems to be a number of cases in libpq, and also in pg_locale.c, that say just that. But in guc.c, we say "SSL is not supported by this build". If we change it, we should change it to the same (including whether "of PostgreSQL" is included). Referring to the build seems more logical, yes. -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] SQLERRD and dump of variables
I have two separate ideas, but they are kind of connected. (1) Make the detailed error message available in SPs and not only the short error message (SQLERRM). When debugging errors in stored procedures, I often add an exception handler and print the values of declared variables to the log. Unfortunately, the original detailed error message is then lost, since SQLERRM only contains the short message. The detailed error message contains valuable information and it would be good if it could be made accessible within the exception handler code. Example of a detailed error message:

    Process 28420 waits for ShareLock on transaction 1421227628; blocked by process 20718.

The SQLERRM in this case only contains "deadlock detected". If you were to add an EXCEPTION WHEN deadlock_detected handler to catch this error, it would be nice if this detailed error message could still be written to the log, in addition to your own customized message containing the values of the declared variables you need to view. The detailed error message is available in edata->detail, while SQLERRM is in edata->message. Perhaps we could name it SQLERRD? (2) New log field showing current values of all declared variables. Instead of using RAISE DEBUG or customizing error messages using exception handlers, such as

    EXCEPTION WHEN deadlock_detected THEN
        RAISE '% var_foo % var_bar %', SQLERRM, var_foo, var_bar
            USING ERRCODE = 'deadlock_detected';

it would be very convenient if you could enable a log setting to write all declared variables' current values directly to the CSV log, for all errors. That would avoid the need to manually edit stored procedures to write variable values to the log, which also means you have to wait for the same error to occur again - which might never happen if you are unlucky. Instead of a new CSV log field, perhaps the setting, when switched on, could append the info to the already existing hint field? Example:

    hint: var_foo=12345 var_bar=67890

This would be of great help to track down errors faster. 
Thoughts? Best regards, Joel Jacobson
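For concreteness, the exception-handler pattern described in the proposal above might look like the following PL/pgSQL sketch. This is purely illustrative: do_work, var_foo, and var_bar are made-up names, and SQLERRD is the proposed, not-yet-existing variable (mentioned only in the comment).

```plpgsql
CREATE OR REPLACE FUNCTION do_work() RETURNS void AS $$
DECLARE
    var_foo integer := 12345;
    var_bar integer := 67890;
BEGIN
    -- ... work that may raise deadlock_detected ...
    PERFORM 1;
EXCEPTION
    WHEN deadlock_detected THEN
        -- Today only the short message (SQLERRM) is available here;
        -- the proposal would also expose the detail, e.g. as SQLERRD.
        RAISE '% var_foo % var_bar %', SQLERRM, var_foo, var_bar
            USING ERRCODE = 'deadlock_detected';
END;
$$ LANGUAGE plpgsql;
```

Under the proposal, the handler could additionally log the detail text (e.g. the "Process 28420 waits for ShareLock ..." line) without having to reconstruct it by hand.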
Re: [HACKERS] branching for 9.2devel
On Mon, 2011-04-25 at 09:17 -0400, Robert Haas wrote: it'll be harder to organize reviewers (see esp. the note by Greg Smith in that regard), As far as I'm concerned, those who run the commit fests will have to work out how to best configure the commit fests. I have no strong feelings about my various suggestions; they were just ideas. Altogether, I feel that keeping it the same is probably the more acceptable option at the moment. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Unfriendly handling of pg_hba SSL options with SSL off
Magnus Hagander mag...@hagander.net writes: On Mon, Apr 25, 2011 at 19:38, Tom Lane t...@sss.pgh.pa.us wrote: While I'm looking at this, I notice that here (and in some other places in pg_hba.conf) we say not supported on this platform which seems rather bogus to me. It implies that it's not possible to have SSL support on the user's machine, which is most likely not the case. I'd be happier with not supported by this build of PostgreSQL or some such wording. Thoughts? There seems to be a number of cases in libpq, and also in pg_locale.c that says just hat. But in guc.c, we say SSL is not supported by this build. If we change it, we should change it to the same (including whether of PostgreSQL is included). Refering to the build seems more logical, yes. Since there's already precedent for saying this build full stop, let's just go with that. I was already thinking that including the product name in translatable strings would cause issues for repackagers. Barring objections, I'll backpatch the added error check, but change the wording of the existing messages only in HEAD. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Foreign table permissions and cloning
On Mon, Apr 25, 2011 at 1:45 PM, Tom Lane t...@sss.pgh.pa.us wrote: Robert Haas robertmh...@gmail.com writes: Hmm, it appears we had some pre-existing inconsistency here, because ALL TABLES IN schema currently includes views. That's weird, but it'll be even more weird if we adopt the approach suggested by this patch, which creates ALL FOREIGN TABLES IN schema but allows ALL TABLES IN schema to go on including views. Maybe there is an argument for having ALL {TABLES|VIEWS|FOREIGN TABLES} IN schema - or maybe there isn't - but having two out of the three of them doesn't do anything for me. Yeah, that's a fair point. Another issue is that eventually foreign tables will probably have some update capability, so designing GRANT on the assumption that only SELECT should be allowed is a mistake. In fact, I'd argue that GRANT ought not be enforcing such an assumption even today, especially if it leads to asymmetry there. Let somebody GRANT UPDATE if they want to --- there's no need to throw an error until the update operation is actually tried. Putting that together with the comments already made upthread, the only behavior changes I think we should make here are: - Add GRANT privilege [(column_list)] ON FOREIGN TABLE table TO role. - Require that the argument to GRANT privilege [(column_list)] ON TABLE TO role be an ordinary table, not a foreign table. I think this might be going in the wrong direction given the above thoughts. At the very least you're going to have to make sure the prohibition is easily reversible. I'm not sure I quite understood what you were saying there, but I'm coming around to the view that this is already 100% consistent with the way views are handled: rhaas=# create view v as select 1; CREATE VIEW rhaas=# grant delete on v to bob; GRANT rhaas=# grant delete on table v to bob; GRANT If that works for a view, it also ought to work for a foreign table, which I think is what you were saying. So now I think this is just a documentation bug. 
Do you agree? -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] Unlogged tables, persistent kind
The amount of data loss on a big table will be 1% of the data loss caused by truncating the whole table. If that 1% is random (not time/transaction related), usually you'd rather have an empty table. In other words: is a table that is not consistent with anything else in the db useful?
Re: [HACKERS] Foreign table permissions and cloning
Robert Haas robertmh...@gmail.com writes: On Mon, Apr 25, 2011 at 1:45 PM, Tom Lane t...@sss.pgh.pa.us wrote: I'm not sure I quite understood what you were saying there, but I'm coming around to the view that this is already 100% consistent with the way views are handled: rhaas=# create view v as select 1; CREATE VIEW rhaas=# grant delete on v to bob; GRANT rhaas=# grant delete on table v to bob; GRANT If that works for a view, it also ought to work for a foreign table, which I think is what you were saying. Yeah, the existing precedent (not only for GRANT but for some other things like ALTER TABLE) is that a command that says TABLE is allowed to apply to other relation types if it makes sense to apply it. It's only when you name some other object type that we get picky about the relkind matching exactly. This is probably more historical than anything else, but it's the precedent and we shouldn't make foreign tables be the only thing not following the precedent. So now I think this is just a documentation bug. If the code already works like that for foreign tables, then no behavioral change is needed. regards, tom lane
Re: [HACKERS] Unfriendly handling of pg_hba SSL options with SSL off
On mån, 2011-04-25 at 13:11 -0400, Tom Lane wrote: Or we could go in the direction of making hostssl lines be a silent no-op in both cases, but that doesn't seem like especially user-friendly design to me. We don't treat any other cases in pg_hba.conf comparably AFAIR. We ignore local even if the system doesn't have Unix-domain sockets. We ignore IPvN entries even if listen_addresses doesn't contain any IPvN addresses (this could be considered equivalent to ssl = on/off). In my experience, it is best to ignore these things. You don't lose anything -- if you don't have SSL configured, no one is going to connect with SSL -- and at best you're going to annoy admins who want to configure systems consistently.
Re: [HACKERS] Unlogged tables, persistent kind
On 2011-04-25 20:00, Leonardo Francalanci wrote: The amount of data loss on a big table will be 1% of the data loss caused by truncating the whole table. If that 1% is random (not time/transaction related), usually you'd rather have an empty table. In other words: is a table that is not consistent with anything else in the db useful? Depends on the application: if it serves for pure caching, then it is fully acceptable and way better than dropping everything. -- Jesper
Re: [HACKERS] Unfriendly handling of pg_hba SSL options with SSL off
On mån, 2011-04-25 at 19:12 +0200, Magnus Hagander wrote: We need to be very careful about ignoring *anything* in pg_hba.conf, since it's security configuration. Doing it silently is even worse. You're not really ignoring anything. It's just not going to be a match.
Re: [HACKERS] Foreign table permissions and cloning
On mån, 2011-04-25 at 13:35 -0400, Robert Haas wrote: Hmm, it appears we had some pre-existing inconsistency here, because ALL TABLES IN schema currently includes views. Which makes sense because you use GRANT ... ON TABLE to grant privileges to views. That's weird, but it'll be even more weird if we adopt the approach suggested by this patch, which creates ALL FOREIGN TABLES IN schema but allows ALL TABLES IN schema to go on including views. Maybe there is an argument for having ALL {TABLES|VIEWS|FOREIGN TABLES} IN schema - or maybe there isn't - but having two out of the three of them doesn't do anything for me. For now I think we should go with the path of least resistance and just document that ALL TABLES IN schema now includes not only views but also foreign tables. Yes. Putting that together with the comments already made upthread, the only behavior changes I think we should make here are: - Add GRANT privilege [(column_list)] ON FOREIGN TABLE table TO role. - Require that the argument to GRANT privilege [(column_list)] ON TABLE TO role be an ordinary table, not a foreign table. But that would be contrary to the SQL standard. The current behavior is fine, AFAICT.
Re: [HACKERS] Unlogged tables, persistent kind
Jesper Krogh jes...@krogh.cc wrote: On 2011-04-25 20:00, Leonardo Francalanci wrote: The amount of data loss on a big table will be 1% of the data loss caused by truncating the whole table. If that 1% is random (not time/transaction related), usually you'd rather have an empty table. In other words: is a table that is not consistent with anything else in the db useful? Depends on the application, if it serves for pure caching then it is fully acceptable and way better than dropping everything. I buy this *if* we can be sure we're not keeping information which is duplicated or mangled, and if we can avoid crashing the server to a panic because of broken pointers or other infelicities. I'm not sure that can't be done, but I don't think I've heard an explanation of how that could be accomplished, particularly without overhead which would wipe out the performance benefit of unlogged tables. (And without a performance benefit, what's the point?) -Kevin
Re: [HACKERS] Unlogged tables, persistent kind
On Mon, Apr 25, 2011 at 2:03 PM, Jesper Krogh jes...@krogh.cc wrote: On 2011-04-25 20:00, Leonardo Francalanci wrote: The amount of data loss on a big table will be 1% of the data loss caused by truncating the whole table. If that 1% is random (not time/transaction related), usually you'd rather have an empty table. In other words: is a table that is not consistent with anything else in the db useful? Depends on the application, if it serves for pure caching then it is fully acceptable and way better than dropping everything. Whoah... When caching, the application already needs to be able to cope with the case where there's nothing in the cache. This means that if the cache gets truncated, it's reasonable to expect that the application won't get deranged - it already needs to cope with the case where data's not there and needs to get constructed. In contrast, if *wrong* data is in the cache, that could very well lead to wrong behavior on the part of the application. And there may not be any mechanism aside from cache truncation that will rectify that. It seems to me that it's a lot riskier to try to preserve contents of such tables than it is to truncate them. -- When confronted by a difficult problem, solve it by reducing it to the question, How would the Lone Ranger handle this?
Re: [HACKERS] Foreign table permissions and cloning
On Mon, Apr 25, 2011 at 2:02 PM, Tom Lane t...@sss.pgh.pa.us wrote: Robert Haas robertmh...@gmail.com writes: On Mon, Apr 25, 2011 at 1:45 PM, Tom Lane t...@sss.pgh.pa.us wrote: I'm not sure I quite understood what you were saying there, but I'm coming around to the view that this is already 100% consistent with the way views are handled: rhaas=# create view v as select 1; CREATE VIEW rhaas=# grant delete on v to bob; GRANT rhaas=# grant delete on table v to bob; GRANT If that works for a view, it also ought to work for a foreign table, which I think is what you were saying. Yeah, the existing precedent (not only for GRANT but for some other things like ALTER TABLE) is that a command that says TABLE is allowed to apply to other relation types if it makes sense to apply it. It's only when you name some other object type that we get picky about the relkind matching exactly. This is probably more historical than anything else, but it's the precedent and we shouldn't make foreign tables be the only thing not following the precedent. So now I think this is just a documentation bug. If the code already works like that for foreign tables, then no behavioral change is needed. OK, let's test that: rhaas=# create foreign data wrapper dummy; CREATE FOREIGN DATA WRAPPER rhaas=# create server s1 foreign data wrapper dummy; CREATE SERVER rhaas=# create foreign table ft (a int) server s1; CREATE FOREIGN TABLE rhaas=# grant delete on ft to bob; ERROR: foreign table ft only supports SELECT privileges rhaas=# grant delete on table ft to bob; ERROR: foreign table ft only supports SELECT privileges So, nope, not the same. 
Also for comparison: rhaas=# create sequence blarg; CREATE SEQUENCE rhaas=# grant delete on blarg to bob; WARNING: sequence "blarg" only supports USAGE, SELECT, and UPDATE privileges GRANT This appears to be because ExecGrant_Relation() has this:

    else if (pg_class_tuple->relkind == RELKIND_FOREIGN_TABLE)
    {
        if (this_privileges & ~((AclMode) ACL_ALL_RIGHTS_FOREIGN_TABLE))
        {
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_GRANT_OPERATION),
                     errmsg("foreign table \"%s\" only supports SELECT privileges",
                            NameStr(pg_class_tuple->relname))));
        }
    }

There's a similar stanza for sequences, but that one uses ereport(WARNING...) rather than ereport(ERROR...). We could either remove that stanza entirely (making foreign tables consistent with views) or change ERROR to WARNING (making it consistent with sequences). If we remove it entirely, then we'll presumably also want to remove this chunk further down:

    else if (pg_class_tuple->relkind == RELKIND_FOREIGN_TABLE &&
             this_privileges & ~((AclMode) ACL_SELECT))
    {
        /* Foreign tables have the same restriction as sequences. */
        ereport(WARNING,
                (errcode(ERRCODE_INVALID_GRANT_OPERATION),
                 errmsg("foreign table \"%s\" only supports SELECT column privileges",
                        NameStr(pg_class_tuple->relname))));
        this_privileges &= (AclMode) ACL_SELECT;
    }

Thoughts? -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] Unlogged tables, persistent kind
On Mon, Apr 25, 2011 at 7:00 PM, Leonardo Francalanci m_li...@yahoo.it wrote: The amount of data loss on a big table will be 1% of the data loss caused by truncating the whole table. If that 1% is random (not time/transaction related), usually you'd rather have an empty table. Why do you think it would be random? In other words: is a table that is not consistent with anything else in the db useful? That's too big a leap. Why would it suddenly be inconsistent with the rest of the database? Not good arguments. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services
Re: [HACKERS] Unfriendly handling of pg_hba SSL options with SSL off
Peter Eisentraut pete...@gmx.net writes: On mån, 2011-04-25 at 13:11 -0400, Tom Lane wrote: Or we could go in the direction of making hostssl lines be a silent no-op in both cases, but that doesn't seem like especially user-friendly design to me. We don't treat any other cases in pg_hba.conf comparably AFAIR. We ignore local even if the system doesn't have Unix-domain sockets. We ignore IPvN entries even if listen_addresses doesn't contain any IPvN addresses (this could be considered equivalent to ssl = on/off). In my experience, it is best to ignore these things. You don't lose anything -- if you don't have SSL configured, no one is going to connect with SSL -- and at best you're going to annoy admins who want to configure systems consistently. Hmm, interesting point, but the problem is that issues like the current one are likely to continue to rear their heads if we try to promise that you can write pg_hba lines that aren't really supported on the current installation. And this immediate problem (clientcert=1 causing an unexpected failure) is far from the only thing that would have to be fixed to handle that. For instance, we throw error if you say authmethod = PAM without any PAM support ... should we try to change that so that the error doesn't happen if it's in a line that can't possibly match an incoming connection? I doubt it. In the particular case at hand, if someone is trying to use the same hostssl-containing pg_hba.conf across multiple systems, is it not reasonable to suppose that he should have SSL turned on in postgresql.conf on all those systems? If he doesn't, it's far more likely to be a configuration mistake that he'd appreciate being pointed out to him, instead of having to reverse-engineer why some of the systems aren't working like others. regards, tom lane
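For concreteness, the shared-configuration scenario under discussion looks something like the following (hypothetical addresses, databases, and methods; only the hostssl/clientcert shape comes from the thread). With ssl = off in postgresql.conf, the hostssl line can never match, and clientcert=1 is the option that currently triggers the unfriendly failure:

    # pg_hba.conf fragment (hypothetical example)
    # TYPE     DATABASE   USER   ADDRESS         METHOD
    hostssl    all        all    10.0.0.0/8      md5   clientcert=1
    host       all        all    127.0.0.1/32    md5

Tom's position is that such a file shipped to a server without ssl = on is most likely a mistake worth reporting; Peter's is that the unmatchable line should simply never match.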
Re: [HACKERS] stored procedures
On tor, 2011-04-21 at 18:24 +0300, Peter Eisentraut wrote: So the topic of real stored procedures came up again. Meaning a function-like object that executes outside of a regular transaction, with the ability to start and stop SQL transactions itself. I would like to add a note about the SQL standard here. Some people have been using terminology that a function does this and a procedure does something else. Others have also mentioned the use of a CALL statement to invoke procedures. Both procedures (as in CREATE PROCEDURE etc.) and the CALL statement are specified by the SQL standard, and they make no mention of any supertransactional behavior or autonomous transactions for procedures. As far as I can tell, it's just a Pascal-like difference that functions return values and procedures don't. So procedure-like objects with a special transaction behavior will need a different syntax or a syntax addition.
Re: [HACKERS] Unlogged tables, persistent kind
On Mon, Apr 25, 2011 at 1:42 PM, Robert Haas robertmh...@gmail.com wrote: On Mon, Apr 25, 2011 at 5:04 AM, Simon Riggs si...@2ndquadrant.com wrote: On Mon, Apr 25, 2011 at 8:14 AM, Robert Haas robertmh...@gmail.com wrote: On Apr 24, 2011, at 1:22 PM, Simon Riggs si...@2ndquadrant.com wrote: Unlogged tables are a good new feature. Thanks. I noticed Bruce had mentioned they were the equivalent of NoSQL, which I don't really accept. Me neither. I thought that was poorly said. Heap blocks would be zeroed if they were found to be damaged, following a crash. The problem is not so much the blocks that are damaged (e.g. half-written, torn page) but the ones that were never written at all. For example, read page A, read page B, update tuple on page A putting new version on page B, write one but not both of A and B out to the O/S, crash. Everything on disk is a valid page, but they are not coherent taken as a whole. It's normally XLOG replay that fixes this type of situation... Not really sure it matters what the cause of data loss is, does it? The zeroing of the blocks definitely causes data loss but the intention is to bring the table back to a consistent physical state, not to in any way repair the data loss. Right, but the trick is how you identify which blocks you need to zero. You used the word damaged, which to me implied that the block had been modified in some way but ended up with other than the expected contents, so that something like a CRC check might detect the problem. My point (as perhaps you already understand) is that you could easily have a situation where every block in the table passes a hypothetical block-level CRC check, but the table as a whole is still damaged because update chains aren't coherent. So you need some kind of mechanism for identifying which portions of the table you need to zero to get back to a guaranteed-coherent state. That sounds like progress. The current mechanism is truncate complete table. 
There are clearly other mechanisms that would not remove all data. Probably the common case would be for insert-only data. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services
Re: [HACKERS] branching for 9.2devel
All, "I'm not aware that we've set any dates for 9.2 CommitFests yet ..." I thought the idea of setting the initial CF for July 15th for 9.1 was that we would consistently have the first CF in July every year? As discussed at that time, there's value to our corporate-sponsored developers in knowing a regular annual cycle. As much as I'd like to start development early officially, I'm with Tom in being pessimistic about the bugs we're going to find in SSI, Collations and Synch Rep. Frankly, if you and Tom weren't so focused on fixing it, I'd be suggesting that we pull Collations from 9.1; there seem to be a *lot* of untested issues there still. I do think that we could bump the first CF up to July 1st, but I don't think sooner than that is realistic without harming beta testing ... and potentially delaying the release. Let's first demonstrate a track record in getting a final release out consistently by July, and if that works, maybe we can bump up the date. Re: shorter CF cycle: this works based on the idea of "one strike" for each patch. That has the benefit of pushing more of the fixing work onto the authors and having less of it on the committers: "Not ready, fix X, Y, Z and resubmit." I think that doing things that way might actually work. However, it will require us to change the CF process in several ways. I'll also point out that pushing fixing work back on the authors is something which committers could be doing *already* in the present structure. And that there's no requirement that our present CFs need to last for a month.
The main issues with a monthly commit week are:

1) Triage: it's hard to go from first-time reviewer -- review -- committer in a week, so a lot of patches would get booted to the next CF just due to time, and
2) Availability: some patches can only be understood by certain committers, who are more likely to be gone for a week than a month, and
3) The CF tool, which is currently fairly manual when it comes to pushing a patch from one CF to the other. This is the easiest thing to fix.

However, given all that, there would be some serious advantages to a monthly commit week:

a) faster feedback to submitters, and
b) more chances for a developer to fix their feature and try again, and
c) more of an emphasis on having the submitter fix what's wrong based on advice, which
   * conserves scarce committer time, and
   * helps the submitters learn more and become better coders
d) eliminates the annoying dead time in each CF, where for the last week of the CF only 2 extremely difficult patches are under review, and
e) eliminates the stigma/trauma of having your stuff rejected, because everyone's stuff will be rejected at least once before acceptance, and
f) even allows us to punt on "everything must be reviewed" if nothing gets punted more than once.

Overall, I think the advantages to a faster/shorter CF cycle outweigh the disadvantages enough to make it at least worth trying. I'm willing to run the first 1-week CF, as well as several of the others during the 9.2 cycle, to try and make it work. I also have an idea for dealing with Problem 1: we actually have 2 weeks, a triage week and a commitfest week. During the Triage week, non-committer volunteers will go through the pending patches and flag stuff which is obviously either broken or ready. That way, by the time committers actually need to review stuff during CF week, the easy patches will have already been eliminated.
Not only will this streamline processing of the patches, it'll help us train new reviewers by giving them a crack at the easy reviews before Tom/Robert/Heikki look at them. It may not work. I think it's worth trying though, and we can always revert to the present system if the 1-week CFs are impeding development or are accumulating a snowball of patch backlog. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com
Re: [HACKERS] stored procedures
On 04/25/2011 02:18 PM, Peter Eisentraut wrote: On tor, 2011-04-21 at 18:24 +0300, Peter Eisentraut wrote: So the topic of real stored procedures came up again. Meaning a function-like object that executes outside of a regular transaction, with the ability to start and stop SQL transactions itself. I would like to add a note about the SQL standard here. Some people have been using terminology that a function does this and a procedure does something else. Others have also mentioned the use of a CALL statement to invoke procedures. Both procedures (as in CREATE PROCEDURE etc.) and the CALL statement are specified by the SQL standard, and they make no mention of any supertransactional behavior or autonomous transactions for procedures. As far as I can tell, it's just a Pascal-like difference that functions return values and procedures don't. So procedure-like objects with a special transaction behavior will need a different syntax or a syntax addition. The trouble is that people using at least some other databases call supertransactional program units stored procedures. Maybe we need a keyword to designate supertransactional behaviour, but if we call them anything but procedures there is likely to be endless confusion, ISTM, especially if we have something called a procedure which is never supertransactional. cheers andrew
Re: [HACKERS] Foreign table permissions and cloning
Robert Haas robertmh...@gmail.com writes: ... There's a similar stanza for sequences, but that one uses ereport(WARNING...) rather than ereport(ERROR...). We could either remove that stanza entirely (making foreign tables consistent with views) or change ERROR to WARNING (making it consistent with sequences). Well, the relevant point here is that there's little or no likelihood that we'll ever care to support direct UPDATE on sequences. This is exactly not the case for foreign tables. So I would argue that GRANT should handle them like views; certainly not be even more strict than it is for sequences. IOW, yeah, let's drop these two checks. regards, tom lane
Re: [HACKERS] stored procedures
On Mon, Apr 25, 2011 at 1:18 PM, Peter Eisentraut pete...@gmx.net wrote: On tor, 2011-04-21 at 18:24 +0300, Peter Eisentraut wrote: So the topic of real stored procedures came up again. Meaning a function-like object that executes outside of a regular transaction, with the ability to start and stop SQL transactions itself. I would like to add a note about the SQL standard here. Some people have been using terminology that a function does this and a procedure does something else. Others have also mentioned the use of a CALL statement to invoke procedures. Both procedures (as in CREATE PROCEDURE etc.) and the CALL statement are specified by the SQL standard, and they make no mention of any supertransactional behavior or autonomous transactions for procedures. As far as I can tell, it's just a Pascal-like difference that functions return values and procedures don't. So procedure-like objects with a special transaction behavior will need a different syntax or a syntax addition. hm. does the sql standard prohibit the use of extra transactional features? are you sure it's not implied that any sql (including START TRANSACTION etc) is valid? meaning, unless otherwise specified, you should be able to do those things, and that our functions because they force one transaction operation are non-standard, not the other way around. merlin
Re: [HACKERS] Unlogged tables, persistent kind
On Mon, Apr 25, 2011 at 2:21 PM, Simon Riggs si...@2ndquadrant.com wrote: Right, but the trick is how you identify which blocks you need to zero. You used the word damaged, which to me implied that the block had been modified in some way but ended up with other than the expected contents, so that something like a CRC check might detect the problem. My point (as perhaps you already understand) is that you could easily have a situation where every block in the table passes a hypothetical block-level CRC check, but the table as a whole is still damaged because update chains aren't coherent. So you need some kind of mechanism for identifying which portions of the table you need to zero to get back to a guaranteed-coherent state. That sounds like progress. The current mechanism is truncate complete table. There are clearly other mechanisms that would not remove all data. No doubt. Consider a block B. If the system crashes when block B is dirty either in the OS cache or shared_buffers, then you must zero B, or truncate it away. If it was clean in both places, however, it's good data and you can keep it. So you can imagine for example a scheme where imagine that the relation is divided into 8MB chunks, and we WAL-log the first operation after each checkpoint that touches a chunk. Replay zeroes the chunk, and we also invalidate all the indexes (the user must REINDEX to get them working again). I think that would be safe, and certainly the WAL-logging overhead would be far less than WAL-logging every change, since we'd need to emit only ~16 bytes of WAL for every 8MB written, rather than ~8MB of WAL for every 8MB written. It wouldn't allow some of the optimizations that the current unlogged tables can get away with only because they WAL-log exactly nothing - and selectively zeroing chunks of a large table might slow down startup quite a bit - but it might still be useful to someone. 
However, I think that the logged table, unlogged index idea is probably the most promising thing to think about doing first. It's easy to imagine all sorts of uses for that sort of thing even in cases where people can't afford to have any data get zeroed, and it would provide a convenient building block for something like the above if we eventually wanted to go that way. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] offline consistency check and info on attributes
Excerpts from Tomas Vondra's message of lun abr 25 14:50:18 -0300 2011: Yes, I know the tuple length is in HeapTupleHeader (and I'm already checking that), but that does not allow to check lengths of the individual columns, especially those with varlena types. That's a very annoying type of corruption, because the queries that do not touch such columns seem to work fine, but once you attempt to access the corrupted column you'll get something like this: pg_dump: SQL command failed pg_dump: Error message from server: ERROR: invalid memory alloc request size 4294967293 Yeah, I agree with this being less than ideal. However, as you conclude, I don't think it's really workable to check this without support from the running system. I wrote a dumb tool to attempt to detoast all varlena columns, capture exceptions and report them; see the code here: http://alvherre.livejournal.com/4404.html (You need to pass it a table name as a text parameter; that bit is crap, as it fails for funny names). Note that this assumes that there is a function length() for every varlena datatype in the table, which may not be true for some of them. -- Álvaro Herrera alvhe...@commandprompt.com The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: [HACKERS] Unfriendly handling of pg_hba SSL options with SSL off
On mån, 2011-04-25 at 14:18 -0400, Tom Lane wrote: In the particular case at hand, if someone is trying to use the same hostssl-containing pg_hba.conf across multiple systems, is it not reasonable to suppose that he should have SSL turned on in postgresql.conf on all those systems? If he doesn't, it's far more likely to be a configuration mistake that he'd appreciate being pointed out to him, instead of having to reverse-engineer why some of the systems aren't working like others. I think people use and configure PostgreSQL in all kinds of ways, so we shouldn't assume what they might be thinking. Especially if an artificial boundary has the single purpose of being helpful. If people want their configuration checked for sanity (by someone's definition), there might be logging or debugging options in order for that.
Re: [HACKERS] branching for 9.2devel
On Mon, Apr 25, 2011 at 4:17 PM, Tom Lane t...@sss.pgh.pa.us wrote: No, not at all, because you're ignoring the common case of a series of dependent patches that are submitted in advance of the first one having been committed. Uh, true. To get to the point where we could do things that way, it would have to be the case that every developer could run pgindent locally and get the same results that the committer would get. Maybe we'll get there someday, and we should certainly try. But we're not nearly close enough to be considering changing policy on that basis. Fwiw I tried getting Gnu indent to work. I'm having a devil of a time figuring out how to get even remotely similar output. I can't even get -ncsb to work, which means it puts *every* one-line comment into a block with the /* and */ delimiters on lines by themselves. And it does line-wrapping differently, such that any lines longer than the limit are split at the *first* convenient place rather than the last, which produces some, imho, strange-looking lines. And it doesn't take a file for the list of typedefs: you have to provide each one as an argument on the command-line. I hacked the source to add the typedefs to the gperf hash it uses, but if we have to patch it, that rather defeats the point of even pondering switching. Afaict it hasn't seen development since 2008, so I don't get the impression it's any more of a live project than the NetBSD source. All in all, even if they've fixed the things it used to mangle, I don't see much point in switching from one moribund project we have to patch to another moribund project we have to patch, especially as it will mean patches won't backpatch as easily, since the output will be quite different. -- greg
Re: [HACKERS] branching for 9.2devel
Josh Berkus j...@agliodbs.com writes: As much as I'd like to start development early officially, I'm with Tom in being pessimistic about the bugs we're going to find in SSI, Collations and Synch Rep. Frankly, if you and Tom weren't so focused on fixing it, I'd be suggesting that we pull Collations from 9.1; there seems to be a *lot* of untested issues there still. If I had realized two months ago what poor shape the collations patch was in, I would have argued to pull it. But the work is done now; there's no reason not to keep it in. The cost is that I wasn't paying any attention to these other areas for those two months, and we can't get that back by pulling the feature. I do think that we could bump the first CF up to July 1st, but I don't think sooner than that is realistic without harming beta testing ... and potentially delaying the release. Let's first demonstrate a track record in getting a final release out consistently by July, and if that works, maybe we can bump up the date. The start-date-on-the-15th was an oddity anyway, and it cannot work well in November or December. +1 for putting the CFs back to starting on the 1st. Overall, I think the advantages to a faster/shorter CF cycle outweigh the disadvantages enough to make it at least worth trying. I'm willing to run the first 1-week CF, as well as several of the others during the 9.2 cycle to try and make it work. I think we could try this once or twice without committing to doing the whole 9.2 cycle that way. I also have an idea for dealing with Problem 1: we actually have 2 weeks, a triage week and a commitfest week. During the Triage week, non-committer volunteers will go through the pending patches and flag stuff which is obviously either broken or ready. That way, by the time committers actually need to review stuff during CF week, the easy patches will have already been eliminated. 
Not only will this streamline processing of the patches, it'll help us train new reviewers by giving them a crack at the easy reviews before Tom/Robert/Heikki look at them. We've sort of unofficially done that already, in that lately it seems the committers don't pay much attention to a new fest until several days in, when things start to reach ready for committer state. That behavior would definitely not work very well in 1-week CFs, so I agree that some kind of multi-stage design would be needed. regards, tom lane
Re: [HACKERS] stored procedures
On mån, 2011-04-25 at 13:34 -0500, Merlin Moncure wrote: hm. does the sql standard prohibit the use of extra transactional features? It doesn't prohibit anything. It just kindly requests that standard syntax have standard behavior. are you sure it's not implied that any sql (including START TRANSACTION etc) is valid? meaning, unless otherwise specified, you should be able to do those things, and that our functions, because they force one transaction operation, are non-standard, not the other way around. Syntactically, it appears to be allowed, and there's something about savepoint levels. So that might be something related. In any case, if we use standard syntax, that should be researched.
Re: [HACKERS] Unlogged tables, persistent kind
Robert Haas robertmh...@gmail.com writes: However, I think that the logged table, unlogged index idea is probably the most promising thing to think about doing first. +1 for that --- it's clean, has a clear use-case, and would allow us to manage the current mess around hash indexes more cleanly. That is, hash indexes would always be treated as unlogged. (Or of course we could fix the lack of WAL logging for hash indexes, but I notice a lack of people stepping up to do that.) regards, tom lane
Re: [HACKERS] stored procedures - use cases?
On tor, 2011-04-21 at 18:24 +0300, Peter Eisentraut wrote: So the topic of real stored procedures came up again. Meaning a function-like object that executes outside of a regular transaction, with the ability to start and stop SQL transactions itself. I would like to collect some specs on this feature. So does anyone have links to documentation of existing implementations, or their own spec writeup? A lot of people appear to have a very clear idea of this concept in their own head, so let's start collecting those. Another point, as there appear to be diverging camps about supertransactional stored procedures vs. autonomous transactions, what would be the actual use cases of any of these features? Let's collect some, so we can think of ways to make them work.
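For concreteness, the kind of object being asked about might look roughly like the following. This syntax is purely hypothetical at the time of this thread (nothing like it exists in PostgreSQL 9.1); it only illustrates "a function-like object that can start and stop SQL transactions itself":

```sql
-- Hypothetical syntax, for illustration only: a procedure that
-- commits its own transactions, something a plain function cannot do.
CREATE PROCEDURE flag_done(batch int)
LANGUAGE plpgsql AS $$
BEGIN
    LOOP
        UPDATE queue SET done = true
            WHERE id IN (SELECT id FROM queue
                         WHERE NOT done LIMIT batch);
        EXIT WHEN NOT FOUND;
        COMMIT;   -- ends one transaction, implicitly starts the next
    END LOOP;
END;
$$;
```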
Re: [HACKERS] Unfriendly handling of pg_hba SSL options with SSL off
Peter Eisentraut pete...@gmx.net writes: On mån, 2011-04-25 at 14:18 -0400, Tom Lane wrote: In the particular case at hand, if someone is trying to use the same hostssl-containing pg_hba.conf across multiple systems, is it not reasonable to suppose that he should have SSL turned on in postgresql.conf on all those systems? If he doesn't, it's far more likely to be a configuration mistake that he'd appreciate being pointed out to him, instead of having to reverse-engineer why some of the systems aren't working like others. I think people use and configure PostgreSQL in all kinds of ways, so we shouldn't assume what they might be thinking, especially if an artificial boundary has the single purpose of being helpful. Well, it's not just to be helpful, it's to close off code paths that are never going to be sufficiently well-tested to not have bizarre failure modes. That helps both developers (who don't have to worry about testing/fixing such code paths) and users (who won't have to deal with the bizarre failure modes). But in any case, I think that the presence of a hostssl line in pg_hba.conf is pretty strong evidence that the admin intends to use SSL, so we should tell him about it if he's forgotten the other piece of setup he needs. If people want their configuration checked for sanity (by someone's definition), there could be logging or debugging options for that. If anyone else agrees with your viewpoint, maybe we could compromise on emitting a LOG message indicating that the hostssl line will be ignored due to SSL being turned off. But I think your approach penalizes people who make simple mistakes in order to lend marginal support to an entirely-hypothetical advanced use case. regards, tom lane
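To make the disputed scenario concrete, the combination under discussion is something like this (illustrative values only):

```
# pg_hba.conf, shared across several machines:
hostssl  all  all  10.0.0.0/8  md5

# postgresql.conf on one of those machines:
ssl = off    # the question: reject this combination at startup,
             # log that the hostssl line is ignored, or say nothing
```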
Re: [HACKERS] branching for 9.2devel
Greg Stark gsst...@mit.edu writes: Fwiw I tried getting Gnu indent to work. I'm having a devil of a time figuring out how to get even remotely similar output. ... And it doesn't take a file for the list of typedefs. You have to provide each one as an argument on the command-line. *Ouch*. Really? It's hard to believe that anyone would consider it remotely usable for more than toy-sized projects, if you have to list all the typedef names on the command line. regards, tom lane
Re: [HACKERS] stored procedures - use cases?
Peter Eisentraut pete...@gmx.net wrote: what would be the actual use cases of any of these features? Let's collect some, so we can think of ways to make them work. The two things which leap to mind for me are: (1) All the \d commands in psql should be implemented in SPs so that they are available from any client, through calling one SP equivalent to one \d command. The \d commands would be changed to call the SPs for releases recent enough to support this. Eventually psql would be free of worrying about which release contained which columns in which system tables, because it would just be passing the parameters in and displaying whatever results came back. I have used products which implemented something like this, and found it quite useful. (2) In certain types of loads -- in particular converting data from old systems into the database for a new system -- you need to load several tables in parallel, with queries among the tables which are being loaded. The ability to batch many DML statements into one transaction is important, to avoid excessive COMMIT overhead and related disk output; however, the ability to ANALYZE tables periodically is equally important, to prevent each access to an initially-empty table from being done as a table scan after it has millions of rows. VACUUM might become equally important if there are counts or totals being accumulated in some tables, or status columns are being updated, as rows are added to other tables. I've often had to do something like this during conversions. This could be handled in an external program (I've often done it in Java), but performance might be better if a stored procedure in PostgreSQL was able to keep SQL/MED streams of data open while committing and performing this maintenance every so many rows. -Kevin
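The load pattern in point (2) is roughly: commit in batches rather than per row, and pause for maintenance (ANALYZE, perhaps VACUUM) every so many batches. A driver-agnostic sketch of that control flow, with commit and maintain as placeholder callbacks rather than any real database API:

```python
# Illustrative sketch of the batch-load pattern described above.
# commit(batch) would run the batch's DML in one transaction and commit;
# maintain() would e.g. ANALYZE the growing target tables so the planner
# stops assuming they are empty.

def batched_load(rows, batch_size, commit, maintain, maintain_every=10):
    batch_no = 0
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= batch_size:
            commit(batch)              # one transaction per batch
            batch = []
            batch_no += 1
            if batch_no % maintain_every == 0:
                maintain()             # periodic ANALYZE/VACUUM hook
    if batch:
        commit(batch)                  # final partial batch
```

The point of the thread is that doing this *inside* the server requires something that can commit mid-procedure, which plain functions cannot.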
Re: [HACKERS] branching for 9.2devel
On 04/25/2011 03:30 PM, Tom Lane wrote: Greg Stark gsst...@mit.edu writes: Fwiw I tried getting Gnu indent to work. I'm having a devil of a time figuring out how to get even remotely similar output. ... And it doesn't take a file for the list of typedefs. You have to provide each one as an argument on the command-line. *Ouch*. Really? It's hard to believe that anyone would consider it remotely usable for more than toy-sized projects, if you have to list all the typedef names on the command line. Looks like BSD does the same. It's just that we hide it in pgindent:

$INDENT -bad -bap -bc -bl -d0 -cdb -nce -nfc1 -di12 -i4 -l79 \
    -lp -nip -npro -bbb $EXTRA_OPTS \
    `egrep -v '^(FD_SET|date|interval|timestamp|ANY)$' $TYPEDEFS | sed -e '/^$/d' -e 's/.*/-T& /'`

I agree it's horrible. cheers andrew
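The backquoted pipeline is the part that turns the typedef list into per-name options: filter out names that collide with indent built-ins, drop blank lines, and let the sed `&` backreference glue -T onto each surviving name. A standalone rendition of just that transformation, using a made-up three-entry typedef file:

```shell
# Illustrative: same filter-and-prefix pipeline as pgindent, on a toy
# typedef list.  FD_SET is excluded, the blank line is dropped, and the
# rest come out as -Tname arguments for indent.
printf 'Oid\nFD_SET\n\nDatum\n' > /tmp/typedefs.list
egrep -v '^(FD_SET|date|interval|timestamp|ANY)$' /tmp/typedefs.list \
    | sed -e '/^$/d' -e 's/.*/-T&/'
# prints: -TOid and -TDatum, one per line
```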
Re: [HACKERS] branching for 9.2devel
On Mon, Apr 25, 2011 at 2:26 PM, Josh Berkus j...@agliodbs.com wrote: I thought the idea of setting the initial CF for July 15th for 9.1 was that we would consistently have the first CF in July every year? As discussed at that time, there's value to our corporate-sponsored developers in knowing a regular annual cycle. Huh? We've never guaranteed anyone a regular annual cycle, and we've never had one. We agreed to use the same schedule for 9.1 as for 9.0; I don't remember anything more than that being discussed anywhere, ever. As much as I'd like to start development early officially, I'm with Tom in being pessimistic about the bugs we're going to find in SSI, Collations and Synch Rep. Frankly, if you and Tom weren't so focused on fixing it, I'd be suggesting that we pull Collations from 9.1; there seems to be a *lot* of untested issues there still. I do think that we could bump the first CF up to July 1st, but I don't think sooner than that is realistic without harming beta testing ... and potentially delaying the release. Let's first demonstrate a track record in getting a final release out consistently by July, and if that works, maybe we can bump up the date. I have no idea where you're coming up with this estimate. I also have an idea for dealing with Problem 1: we actually have 2 weeks, a triage week and a commitfest week. During the Triage week, non-committer volunteers will go through the pending patches and flag stuff which is obviously either broken or ready. That way, by the time committers actually need to review stuff during CF week, the easy patches will have already been eliminated. Not only will this streamline processing of the patches, it'll help us train new reviewers by giving them a crack at the easy reviews before Tom/Robert/Heikki look at them. This is basically admitting on its face that one week isn't long enough. 
One week of triage and one week of CommitFest is two weeks, and right there we've lost all of the supposed benefit of reducing the percentage of time we spend in CommitFest mode. Furthermore, it's imposing a rigid separation between triage and commit that seems to me to have no value. If a patch is ready to commit after 3 days, should we ignore it for 4 days and then go back and look at it? Or should we maybe just commit it while the thread is still fresh in someone's mind and move on? The current process allows for that and, well, it doesn't work perfectly, but defining more rigid process around the existing process does not seem likely to help. At the risk of getting a bit cranky, you haven't participated in a material way in any CommitFest we've had in well over a year. AFAICS, the first, last, and only time you are listed in the CommitFest application is as co-reviewer of a patch in July 2009, which means that the last time you really had a major role in this process was during the 8.4 cycle. So I'm really rather suspicious that you know what's wrong with the process and how to fix it better than the people who are involved currently. I think what we need here is more input from the people who are regularly submitting and reviewing patches, and those who have tried recently but been turned off by some aspect of the process. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company