Re: [HACKERS] How can I change patch status in CommitFest application?
"MauMau" writes: > I re-submitted a patch and added a comment on the page below. I chose > "patch" from the comment type drop-down box, but the patch status does not > change from "waiting on author". I expected the patch status would become > "needs review". No, adding a comment doesn't change anything else. > What should I do? Where should I refer to handle patch status correctly? Click on the "Edit Patch" link, then change the status. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] How can I change patch status in CommitFest application?
Hello, I re-submitted a patch and added a comment on the page below. I chose "patch" from the comment type drop-down box, but the patch status does not change from "waiting on author". I expected the patch status would become "needs review". Patch: Allow multiple Postgres clusters running on the same machine to distinguish themselves in the event log Edit Patch - Move To Another CommitFest - Delete Patch https://commitfest.postgresql.org/action/patch_view?id=562 What should I do? Where should I refer to handle patch status correctly? Regards MauMau -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] patch for distinguishing PG instances in event log v2
Hello, The attached file is a revised patch which reflects all review comments by Magnus in: http://archives.postgresql.org/pgsql-hackers/2011-07/msg00839.php I made sure the previous tests (both custom and default "PostgreSQL" event source) succeeded. I'm submitting this to the currently open CommitFest 2011-9 shortly. Please review it again. Regards MauMau multi_event_source_v2.patch Description: Binary data -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] patch: Allow \dd to show constraint comments
On Jul 15, 2011, at 3:48 PM, Josh Berkus wrote: > Josh, > >> Fair enough. If the pg_comments patch does go down in flames, I can >> circle back and patch up the rest of the holes in \dd. > > I am unable to figure out the status of the pg_comments patch from this > thread. What's going on with it? I'll review it in the next few days. ...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] patch: pg_comments system view
On Fri, Jul 15, 2011 at 7:01 PM, Josh Berkus wrote: > On 7/15/11 3:54 PM, Josh Kupershmidt wrote: >> So that's where the pg_comments patch stands, at least AIUI. Clear as >> mud yet? :) > > Sounds like "returned with feedback" to me. Yeah, that's fine for this CF (though I do welcome any suggestions/feedback about the problems which need to be fixed). I'll try to put together another version for the next CF soon. Josh -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_class.relistemp
On Jul 13, 2011, at 2:23 PM, David E. Wheeler wrote: > Wasn't newsysviews supposed to deal with these sorts of issues? Why were they > rejected? Unless they recently came up again and got rejected again; the original complaint was that some of their conventions didn't follow information_schema conventions. The community wanted that changed and that never happened. -- Jim C. Nasby, Database Architect j...@nasby.net 512.569.9461 (cell) http://jim.nasby.net -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] SSI error messages
"Kevin Grittner" writes: > OK, after getting distracted by test failures caused by an unrelated > commit, I've confirmed that this passes my usual tests. I don't > know anything about the tools used for extracting the text for the > translators, so if that needs any corresponding adjustment, someone > will need to point me in the right direction or cover that part > separately. Well, the point is that this function *isn't* going to be known to the NLS code, so AFAICS no adjustments should be needed there. You did miss some places that ought to be updated (mumble sources.sgml mumble) but unless I hear objections to the whole idea, I'll fix and apply this tomorrow. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] proposal: a validator for configuration files
Alvaro Herrera writes: > Excerpts from Alexey Kluykin's message of jue jul 14 09:18:15 -0400 2011: >> This is happening because a check for total number of errors so far >> is happening only after coming across at least one non-recognized >> configuration option. What about adding one more check right after >> ParseConfigFile, so we can bail out early when overwhelmed with >> syntax errors? This would save a line in translation :). > Actually I think it would make sense to do it completely the other way > around: reset the number of errors to zero before starting to apply the > values. The rationale is that all the settings that made it past the > tokenisation are completely different lines for which the errors were > reported earlier. I'd like to re-open the discussion from here http://archives.postgresql.org/pgsql-hackers/2011-06/msg01741.php http://archives.postgresql.org/pgsql-hackers/2011-04/msg00330.php specifically about this concern: >>> Does that mean that some backends might currently choose to ignore an >>> updated config file wholesale on SIGUP (because some settings are invalid) >>> while others happily apply it? Meaning that they'll afterwards disagree >>> even on modified settings which *would* be valid for both backends? I complained about exactly that point back in April, but at the time people (quite reasonably) didn't want to touch the behavior. Now that we are entering a new development cycle, it's time to reconsider. Particularly so if we're considering a patch that touches the behavior already. I think that it might be sensible to have the following behavior: 1. Parse the file, where "parse" means collect all the name = value pairs. Bail out if we find any syntax errors at that level of detail. (With this patch, we could report some or all of the syntax errors first.) 2. Tentatively apply the new custom_variable_classes setting if any. 3. Check to see whether all the "name"s are valid. If not, report the ones that aren't and bail out. 4. Apply each "value". If some of them aren't valid, report that, but continue, and apply all the ones that are valid. We can expect that the postmaster and all backends will agree on the results of steps 1 through 3. They might differ as to the validity of individual values in step 4 (as per my example of a setting that depends on database_encoding), but we should never end up with a situation where a globally correct value is not globally applied. The original argument for the current behavior was to avoid applying settings from a thoroughly munged config file, but I think that the checks involved in steps 1-3 would be sufficient to reject files that had major problems. It's possible that step 1 is really sufficient to cover the issue, in which case we could drop the separate step-3 pass and just treat invalid GUC names as a reason to ignore the particular line rather than the whole file. That would make things simpler and faster, and maybe less surprising too. Comments? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
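[Editorial illustration] A minimal pseudocode sketch of the four-step reload sequence proposed above may make the flow easier to follow. Every type and function name below is a hypothetical placeholder, not an actual guc-file.c entry point:

    /*
     * Hypothetical sketch of the proposed SIGHUP reload sequence.  The
     * ConfigItem type and all helpers are placeholders, not the real API.
     */
    typedef struct ConfigItem
    {
        const char *name;
        const char *value;
    } ConfigItem;

    extern bool collect_name_value_pairs(const char *path, ConfigItem **items, int *n);
    extern void apply_custom_variable_classes(ConfigItem *items, int n);
    extern bool guc_name_is_valid(const char *name);
    extern bool apply_guc_value(const char *name, const char *value);
    extern void report_config_error(const char *fmt, ...);

    static bool
    reload_config_sketch(const char *path)
    {
        ConfigItem *items;
        int         n, i;
        bool        names_ok = true;

        /* Step 1: collect all name = value pairs; any syntax error aborts. */
        if (!collect_name_value_pairs(path, &items, &n))
            return false;

        /* Step 2: tentatively apply custom_variable_classes, if present. */
        apply_custom_variable_classes(items, n);

        /* Step 3: every name must be recognized, or the file is ignored. */
        for (i = 0; i < n; i++)
        {
            if (!guc_name_is_valid(items[i].name))
            {
                report_config_error("unrecognized parameter \"%s\"", items[i].name);
                names_ok = false;
            }
        }
        if (!names_ok)
            return false;

        /* Step 4: apply each value; bad values are reported but skipped,
         * and the remaining valid settings still take effect. */
        for (i = 0; i < n; i++)
        {
            if (!apply_guc_value(items[i].name, items[i].value))
                report_config_error("invalid value for parameter \"%s\"",
                                    items[i].name);
        }
        return true;
    }

In this scheme the postmaster and every backend should agree on the outcome of steps 1 through 3; only step 4 can legitimately differ per process.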
Re: [HACKERS] patch for 9.2: enhanced errors
Pavel Stehule writes: > I am sending a updated patch I looked over this patch a bit. I guess my main concern about it is that the set of items to be reported seems to have been made up on a whim. I think that we ought to follow the SQL standard, which has a pretty clearly defined set of additional information items --- look at the spec for the GET DIAGNOSTICS statement. (In SQL:2008, this is section 23.1 .) I don't feel that we need to implement every field the standard calls for, at least not right away, but we ought to have their list in mind. Conversely, implementing items that *aren't* listed in the spec has to meet a considerably higher bar than somebody just submitting a patch that does it. The standard information items that seem reasonable for us to implement in the near future are COLUMN_NAME CONSTRAINT_NAME CONSTRAINT_SCHEMA SCHEMA_NAME TABLE_NAME TRIGGER_NAME TRIGGER_SCHEMA So I'd like to see the patch revised to use this terminology. We probably also need to think a bit harder about the PG_DIAG_XXX code letters to be used --- we're already just about at the limit of what fields can have reasonably-mnemonic code letters, and not all of the above have obvious choices, let alone the rest of what's in the spec that we might someday want to implement. What assignment rule should we use when we can't choose a mnemonic letter? Some other specific comments on the patch follow: 1. It's way short in the documentation department. protocol.sgml certainly needs additions (see "Error and Notice Message Fields"), also libpq.sgml's discussion of PQresultErrorField(), also sources.sgml's "Reporting Errors Within the Server", and I'm not sure where else. 2. I think you could drop the tuple-descriptor changes, because they're only needed in service of an information item that is not found in the standard and doesn't seem very necessary. The standard says to report the name of the constraint, not what columns it involves. 3. errrel() is extremely poorly considered. The fact that it requires utils/relcache.h to be #included by elog.h (and therefore by *every* *single* *file* in the backend) is a symptom of that, but expecting elog.c to do catalog lookups is as bad or worse from a modularity standpoint. I think all the added elog functions should not take anything higher-level than a C string. 4. Actually, it would probably be a good idea to avoid inventing a new elog API function for each individual new information item; something along the lines of "erritem(PG_DIAG_WHATEVER, string_value)" would be more appropriate to cover the inevitable future expansions. 5. I don't think IndexRelationGetParentRelation is very appropriate either --- in the use cases you have, the parent table's OID is easily accessible, as is its namespace (which'll be the same as the index's) and so you could just have the callers do get_rel_name(tableoid). Doing a relcache open in an error reporting path seems like overkill. I'm going to mark this patch Returned With Feedback. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
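[Editorial illustration] To show the shape of the erritem() idea suggested above: this is a sketch only. The erritem() function and the PG_DIAG_SCHEMA_NAME / PG_DIAG_TABLE_NAME / PG_DIAG_CONSTRAINT_NAME codes are assumptions for illustration, not existing elog.h or protocol definitions; get_rel_name() and get_namespace_name() are the existing lsyscache lookups Tom mentions.

    /* Hypothetical generic field-setter, parallel to errdetail() and friends. */
    extern int erritem(char field_code, const char *value);

    /* A unique-violation report might then carry the standard items like so
     * (fragment; conname, relnamespace and tableoid come from the caller). */
    ereport(ERROR,
            (errcode(ERRCODE_UNIQUE_VIOLATION),
             errmsg("duplicate key value violates unique constraint \"%s\"",
                    conname),
             erritem(PG_DIAG_SCHEMA_NAME, get_namespace_name(relnamespace)),
             erritem(PG_DIAG_TABLE_NAME, get_rel_name(tableoid)),
             erritem(PG_DIAG_CONSTRAINT_NAME, conname)));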
Re: [HACKERS] isolation tests broken for other than 'read committed'
Excerpts from Kevin Grittner's message of vie jul 15 18:23:10 -0400 2011: > It's been a few days since I've run through my usual builds and > tests, and I just discovered that part of my routine was broken by > this commit: > > http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=846af54dd5a77dc02feeb5e34283608012cfb217 Sorry 'bout that. > The isolation tests are broken when run against a database with > default_transaction_isolation = 'repeatable read' or 'serializable'. > (Which is ironic, really.) > > Adding the attached files to src/test/isolation/expected/ causes > those stricter isolation levels to work in my tests so far, but I > get random failures in 'read committed' due to apparent randomness > in which process is chosen as the deadlock victim. I seem to > remember Noah mentioning this and a suggested fix, but the problem > in manifest in a current checkout of head. Yeah, the patch I committed from Noah should fix the issues in read committed. It hadn't crossed my mind that I need to manually set the level to serializable in order for the tests to be meaningful :-( Shouldn't we be running the tests three times with the different useful isolation levels? > Of course, another approach to this would be to set transaction > isolation level in the new tests. If we do that, we might want to > create tests at all three levels, for completeness. I think your approach of adding alternative expected outputs makes more sense. -- Álvaro Herrera The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Is there a committer in the house?
Alvaro, > The "ready for committer" state does not mean that the committer can > grab the patch and apply it. Last time I checked, one was still > expected to review it and take full responsibility for any breakage > caused by it. You're absolutely correct. Which is why committer bandwidth is such a choke point. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] FOR KEY LOCK foreign keys
Excerpts from Noah Misch's message of mié jul 13 01:34:10 -0400 2011: > coypu failed during the run of the test due to a different session being > chosen > as the deadlock victim. We can now vary deadlock_timeout to prevent this; see > attached fklocks-tests-deadlock_timeout.patch. This also makes the tests much > faster on a default postgresql.conf. I applied your patch, thanks. I couldn't reproduce the failures without it, even running only the three new tests in a loop a few dozen times. > crake failed when it reported waiting on the first step of an existing > isolation > test ("two-ids.spec"). I will need to look into that further. Actually, there are four failures in tests other than the two fixed by your patch. These are: http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2011-07-12%2022:32:02 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=nightjar&dt=2011-07-14%2016:27:00 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pitta&dt=2011-07-15%2015:00:08 http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2011-07-15%2018:32:02 The last two are an identical failure in multiple-row-versions: *** *** 1,11 Parsed test spec with 4 sessions starting permutation: rx1 wx2 c2 wx3 ry3 wy4 rz4 c4 c3 wz1 c1 ! step rx1: SELECT * FROM t WHERE id = 100; id txt 100 - step wx2: UPDATE t SET txt = 'b' WHERE id = 100; step c2: COMMIT; step wx3: UPDATE t SET txt = 'c' WHERE id = 100; step ry3: SELECT * FROM t WHERE id = 50; --- 1,12 Parsed test spec with 4 sessions starting permutation: rx1 wx2 c2 wx3 ry3 wy4 rz4 c4 c3 wz1 c1 ! step rx1: SELECT * FROM t WHERE id = 100; ! step wx2: UPDATE t SET txt = 'b' WHERE id = 100; ! step rx1: <... completed> id txt 100 step c2: COMMIT; step wx3: UPDATE t SET txt = 'c' WHERE id = 100; step ry3: SELECT * FROM t WHERE id = 50; The other failure by crake in two-ids: *** *** 440,447 step c3: COMMIT; starting permutation: rxwy2 wx1 ry3 c2 c3 c1 ! step rxwy2: update D2 set id = (select id+1 from D1); step wx1: update D1 set id = id + 1; step ry3: select id from D2; id --- 440,448 step c3: COMMIT; starting permutation: rxwy2 wx1 ry3 c2 c3 c1 ! step rxwy2: update D2 set id = (select id+1 from D1); step wx1: update D1 set id = id + 1; + step rxwy2: <... completed> step ry3: select id from D2; id And the most problematic one, in nightjar, is a failure to send two async commands, which is not supported by the new code: --- 255,260 ERROR: could not serialize access due to read/write dependencies among transactions starting permutation: ry2 wx2 rx1 wy1 c2 c1 ! step ry2: SELECT count(*) FROM project WHERE project_manager = 1; ! failed to send query: another command is already in progress -- Álvaro Herrera The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] patch: pg_comments system view
On 7/15/11 3:54 PM, Josh Kupershmidt wrote: > So that's where the pg_comments patch stands, at least AIUI. Clear as > mud yet? :) Sounds like "returned with feedback" to me. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] patch: pg_comments system view
On Fri, Jul 15, 2011 at 4:48 PM, Josh Berkus wrote: > I am unable to figure out the status of the pg_comments patch from this > thread. What's going on with it? I don't blame you :-) I think this thread got so confusing because two separate topics were intertwined. (I'm going to try to change this thread subject to reflect the fact that we're really talking about pg_comments here now.) First, the original post, with a small patch to fix one of the many problems with psql's \dd command. That patch was rejected because it only plugged one of the many problems with \dd. I have since started another thread[1] with a plausible fix for \dd and several other backslash commands, so that we will have working displays of all comments with minimal duplication. Second, we have the pg_comments view (latest version is the "v10.WIP"). Despite its WIP tag, I think it is actually pretty close to being complete at this point. The first concern I raised in that thread[2]: > 1.) For now, I'm just ignoring the issue of visibility checks; is the only big issue I see as still outstanding. At first, I was assuming that \dd should naturally read from pg_comments to fetch the object comments it is interested in. But that would mean that we'd need some way to duplicate those "visibility checks" \dd was doing, either in \dd or in another "is_visible" column in pg_comments. I haven't tried either of those options out yet, but I was worried they'd both be tricky/ugly. Which leads me to think, maybe it's not so bad if \dd stays the way I've suggested in thread[1], i.e. just querying pg_[sh]description for the five object types it needs directly. After all, \dd will IMO be close to useless/deprecated once we have pg_comments; it'll be much easier to just query pg_comments for what you're looking for directly, and \dd will only display five funky object types, anyway. How do folks feel about this issue? The second concern I raised with the last pg_comments patch, > I think now might be a good time to > re-examine what types of objects are displayed by \dd. should be handled by thread[1], and the third concern is just about whitespace. Oh, and docs need some adjusting too, and it'd be nice if someone sanity checked my guesses for the "is_system" column (or if it's not needed, that's OK too). So that's where the pg_comments patch stands, at least AIUI. Clear as mud yet? :) Josh [1] http://archives.postgresql.org/pgsql-hackers/2011-07/msg00459.php [2] http://archives.postgresql.org/message-id/CAK3UJRGNwKq0c2VsSYV-Mg55Y_kvZE=8fmr_xt8rzp__1lo...@mail.gmail.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Is there a committer in the house?
Excerpts from Josh Berkus's message of vie jul 15 18:33:14 -0400 2011: > > > http://wiki.postgresql.org/wiki/Committers > > Oh, thanks! I didn't know that existed. > > So, if any of the following people could possibly pick up even one patch > from the current commitfest and commit it, it would clear out our > pending commit list: The "ready for committer" state does not mean that the committer can grab the patch and apply it. Last time I checked, one was still expected to review it and take full responsibility for any breakage caused by it. Given this, I cannot responsibly grab more than a couple patches, because then I'd be swamped when bug reports started coming in. -- Álvaro Herrera The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] SSI error messages
"Kevin Grittner" wrote: > Alvaro Herrera wrote: >> Excerpts from Tom Lane's message: >> >>> I think that Peter's real concern is whether these are worth >>> translating, and I share that doubt. Perhaps we should invent >>> an errdetail_internal, parallel to errmsg_internal, that works >>> like errdetail but doesn't treat the message as a candidate for >>> translation? >> >> Yeah. The other point is that translated versions of those >> phrases are likely to introduce randomly diverging terms, which >> is not going to help with debugging. As long as the technology >> is new enough that a user is going to need help from -hackers (or >> at least someone who read and grokked README.SSI) to debug a >> problem, there's no benefit to translating those errdetails. > > OK, since that seems to be the consensus, I put a patch together. > Testing it now. Will post once I've confirmed it doesn't break > anything. OK, after getting distracted by test failures caused by an unrelated commit, I've confirmed that this passes my usual tests. I don't know anything about the tools used for extracting the text for the translators, so if that needs any corresponding adjustment, someone will need to point me in the right direction or cover that part separately. The leading whitespace changes in predicate.c are from pgindent. -Kevin *** a/src/backend/storage/lmgr/predicate.c --- b/src/backend/storage/lmgr/predicate.c *** *** 3776,3782 CheckForSerializableConflictOut(bool visible, Relation relation, ereport(ERROR, (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE), errmsg("could not serialize access due to read/write dependencies among transactions"), !errdetail("Canceled on identification as a pivot, during conflict out checking."), errhint("The transaction might succeed if retried."))); } --- 3776,3782 ereport(ERROR, (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE), errmsg("could not serialize access due to read/write dependencies among transactions"), !errdetail_internal("Canceled on identification as a pivot, during conflict out checking."), errhint("The transaction might succeed if retried."))); } *** *** 3865,3871 CheckForSerializableConflictOut(bool visible, Relation relation, ereport(ERROR, (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE), errmsg("could not serialize access due to read/write dependencies among transactions"), ! 
errdetail("Canceled on conflict out to old pivot %u.", xid), errhint("The transaction might succeed if retried."))); if (SxactHasSummaryConflictIn(MySerializableXact) --- 3865,3871 ereport(ERROR, (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE), errmsg("could not serialize access due to read/write dependencies among transactions"), !errdetail_internal("Canceled on conflict out to old pivot %u.", xid), errhint("The transaction might succeed if retried."))); if (SxactHasSummaryConflictIn(MySerializableXact) *** *** 3873,3879 CheckForSerializableConflictOut(bool visible, Relation relation, ereport(ERROR, (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE), errmsg("could not serialize access due to read/write dependencies among transactions"), !errdetail("Canceled on identification as a pivot, with conflict out to old committed transaction %u.", xid), errhint("The transaction might succeed if retried."))); MySerializableXact->flags |= SXACT_FLAG_SUMMARY_CONFLICT_OUT; --- 3873,3879 ereport(ERROR, (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE), errmsg("could not serialize access due to read/write dependencies among transactions"), !errdetail_internal("Canceled on identification as a pivot, with conflict out to old committed transaction %u.", xid), errhint("The transaction might succeed if retried."))
Re: [HACKERS] proposal: a validator for configuration files
Excerpts from Alexey Kluykin's message of jue jul 14 09:18:15 -0400 2011: > > On Jul 14, 2011, at 4:38 AM, Alvaro Herrera wrote: > > >> On Jul14, 2011, at 01:38 , Alvaro Herrera wrote: > This is happening because a check for total number of errors so far is > happening only after coming across at least one non-recognized configuration > option. What about adding one more check right after ParseConfigFile, so we > can bail out early when overwhelmed with syntax errors? This would save a > line in translation :). Actually I think it would make sense to do it completely the other way around: reset the number of errors to zero before starting to apply the values. The rationale is that all the settings that made it past the tokenisation are completely different lines for which the errors were reported earlier. > > I know I'd feel more comfortable if you (and Alexey, and Selena) gave it > > another look :-) > > I have checked it here and don't see any more problems with it. Thanks. -- Álvaro Herrera The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Is there a committer in the house?
> http://wiki.postgresql.org/wiki/Committers Oh, thanks! I didn't know that existed. So, if any of the following people could possibly pick up even one patch from the current commitfest and commit it, it would clear out our pending commit list: * Bruce Momjian * Tatsuo Ishii * Andrew Dunstan * Magnus Hagander * Greg Stark * Itagaki Takahiro ... noting that I have already heard from Simon, Joe, Alvaro, Tom, Robert, Peter and Heikki. I'm basing the above list on (a) some knowledge of which folks only seem to work on very specific areas of code, and (b) looking at pgsql-committers for the past 3 weeks. In any case, it seems like we have a pool of 13 people who could be committing general patches every commitfest. Not a huge number considering the number of patches we get, but more than I see actually committing stuff for most CFs. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] patch for distinguishing PG instances in event log
On Fri, Jul 15, 2011 at 15:03, MauMau wrote: > Magnus, > > Thank you for reviewing my patch. I'm going to modify the patch according to > your comments and re-submit it. Before that, I'd like to discuss some points > and get your agreement. Ok, please do. If you want to, you can work off my git branch at http://github.com/mhagander/postgres (branch multievent). > From: "Magnus Hagander" >> >> + >> + On Windows, you need to register an event source >> + and its library with the operating system in order >> + to make use of the eventlog option for >> + log_destination. >> + See for details. >> + >> >> * This part is not strictly correct - you don't *need* to do that, it >> just makes things look nicer, no? > > How about the following statement? Is it better to say "correctly" instead > of "cleanly"? > > -- > On Windows, when you use the eventlog option for log_destination, you need > to register an event sourceand its library with the operating system so that > the Windows Event Viewer can display event log messages cleanly. > -- Replace "need" with "should" and I'm happy with that. >> * Also, what is the use for set_eventlog_parameters()? It's just a >> string variable, it shuold work without it. > > Yes, exactly. I'll follow your modification. > >> * We these days avoid #ifdef'ing gucs just because they are not on >> that platform - so the list is consistent. The guc should be available >> on non-windows platforms as well. > > I see. When I saw syslog_ident not being #ifndef WIN32'ed, I thought that > was because syslog might be available on Cygwin or MinGW. Anyway, I'll take > your modification. Nope, it used to be #ifdefed on HAVE_SYSLOG, but we changed that a while ago for consistency. >> * The guc also needs to go in postgresql.conf.sample > > I missed this completely. > >> * Are we really allowed to call MessageBox in DlLRegisterService? >> Won't that break badly in cases like silent installs? > > It seems that the only way to notify the error is MessageBox. Printing to > stdout/stderr does not show any messages on the command prompt. I guess > that's why the original author of pgevent.c used MessageBox. oh, we're already using messagebox.. I must've been confused, I thought it was new. Heck, it might even be me who wrote that :O > We cannot know within the DLL if /s (silent) switch was specified to > regsvr32.exe. If we want to suppress MessageBox during silent installs, it > seems that we need to know the silent install by an environment variable or > so. That is: > > C:\> set PGSILENT=true > C:\> regsvr32.exe ... I think if we're not Ok with messagebox, then we should just write it to the eventlog. However, given that we have been using messagebox before and had no complaints, I should withdraw my complaint for now - keep using messagebox like the surrounding code. We can attack the potential issue of that in a separate patch later. >> * We never build in unicode mode, so all those checks are unnecessary. >> >> Attached is an updated patch, which doesn't work yet. I believe the >> changes to the backend are correct, but probably some of the cleanups >> and changes in the dll are incorrect, because I seem to be unable to >> register either the default or a custom handler so far. > > The cause of the problem you are encountering is here: > > + if (pszCmdLine && *pszCmdLine != '\0') > + strncpy(event_source, sizeof(event_source), pszCmdLine); > + else > + strcpy(event_source, "PostgreSQL"); > > DllInstall() always receives the /i argument in UTF-16. str... 
functions > like strncpy() cannot be used for handling UTF-16 strings. For example, when > you run "regsvr32.exe /i:abc ...", DllInstall's pszCmdLine value becomes > "a\0b\0c\0". So, strncpy() just copies "a". This is the reason why you > cannot register the custom event source. Oh, it gets it as UTF-16 even when not in unicode mode. I see - yeah, I didn't have time to read up on the calling conventions properly. > The reason why you cannot register the default event source (i.e. running > regsvr32 without /i) is that you memset()ed event_source in DllMain() and > set "PostgreSQL" to it in DllInstall(). DllInstall() is not called when /i > is not specified. So, event_source remains empty. > > To solve the problem, we need to use wcstombs_s() instead of strncpy(), and > initialize event_source to "PostgreSQL" when it is defined. Ok, seems reasonable. -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
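[Editorial illustration] A minimal sketch of the DllInstall() argument handling MauMau describes above; this follows the discussion rather than the committed pgevent.c code, and the buffer size is an arbitrary assumption.

    #include <windows.h>
    #include <stdlib.h>

    /* Default event source, used when regsvr32 is run without /i:name. */
    static char event_source[256] = "PostgreSQL";

    STDAPI
    DllInstall(BOOL bInstall, LPCWSTR pszCmdLine)
    {
        (void) bInstall;

        /* pszCmdLine arrives as UTF-16 even in a non-Unicode build, so a
         * plain strncpy() would stop after the first byte ("a" from "abc");
         * convert it to the multibyte form the registration code expects. */
        if (pszCmdLine && *pszCmdLine != L'\0')
        {
            size_t  converted = 0;

            wcstombs_s(&converted, event_source, sizeof(event_source),
                       pszCmdLine, sizeof(event_source) - 1);
        }

        /* Registration itself (DllRegisterServer) is unchanged and omitted here. */
        return S_OK;
    }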
[HACKERS] isolation tests broken for other than 'read committed'
It's been a few days since I've run through my usual builds and tests, and I just discovered that part of my routine was broken by this commit: http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=846af54dd5a77dc02feeb5e34283608012cfb217 The isolation tests are broken when run against a database with default_transaction_isolation = 'repeatable read' or 'serializable'. (Which is ironic, really.) Adding the attached files to src/test/isolation/expected/ causes those stricter isolation levels to work in my tests so far, but I get random failures in 'read committed' due to apparent randomness in which process is chosen as the deadlock victim. I seem to remember Noah mentioning this and a suggested fix, but the problem is manifest in a current checkout of head. Of course, another approach to this would be to set transaction isolation level in the new tests. If we do that, we might want to create tests at all three levels, for completeness. -Kevin fk-deadlock_1.out Description: Binary data fk-deadlock2_1.out Description: Binary data fk-deadlock2_2.out Description: Binary data -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Is there a committer in the house?
Josh Berkus writes: > You might notice that we don't publish a list of committers anywhere. > In fact, *I* don't have one. http://wiki.postgresql.org/wiki/Committers regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Patch Review: Collect frequency statistics and selectivity estimation for arrays
> Actually, not MCV slot is used but MCELEM. It is a different slot. pg_stats > view maps both into the same fields. Yes, you're right. I am sorry about the error. > Surely, non-standard use of histogram slot > should be avoided. Agreed. Best, Nathan -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Is there a committer in the house?
>> Yeah, everybody's super-touchy this week. Must be the weather. > > Somehow blaming everyone else doesn't seem like the proper reaction. :-( Look, it wasn't meant to be a complete list, or even a representative one. That's why I tagged a "Bueller?" at the end. Even for those who don't get the reference, there should have been a "Who the hell is Bueller?" You might notice that we don't publish a list of committers anywhere. In fact, *I* don't have one. So when I want to say "hey, who could be doing committing and isn't currently?" I don't have any way to answer that question. Which is why I posted the e-mail. So, I reiterate: who do we have, as committers, who are capable of committing a fairly wide array of patches? This would be good information for every CF Manager to have, when things start getting stuck in "ready for committer". -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] libpq SSL with non-blocking sockets
Robert Haas writes: > On Fri, Jul 8, 2011 at 9:36 AM, Martin Pihlak wrote: >> On 07/03/2011 05:08 AM, Steve Singer wrote: >>> Since the original patch was submitted as a WIP patch and this version >>> wasn't sent until well into the commit fest I am not sure if it >>> qualifies for a committer during this commitfest or if it needs to wait >>> until the next one. >> If possible I would like the fix to be backported as well. This is >> quite a nasty bug and difficult to track down. Especially as the >> actual SSL error messages are masked by the "server closed the >> connection unexpectedly" message. > I would not be inclined to back-patch this straight away. I agree > it's an important fix, but I am a little worried about the chances of > breaking something else... then again, I don't have the only vote > here. I reviewed this patch a bit. I agree that we have a bug here that should be fixed and backported, but I don't think this patch is ready. Some problems: 1. The behavior under low-memory conditions is pretty intolerable. As coded, a malloc failure here: +conn->outBufSize = Max(16 * 1024, remaining); +conn->outBuffer = malloc(conn->outBufSize); +if (conn->outBuffer == NULL) +{ +printfPQExpBuffer(&conn->errorMessage, + "cannot allocate memory for output buffer\n"); +return -1; +} leaves libpq's internal state completely corrupt --- outBuffer is null, which will result in instant coredump on any future access, but that's just the tip of the iceberg because we've also lost buffered data and messed up the none-too-clearly-defined-anyway state of which pending data is in which buffer. In general, I don't think it's a smart idea to proceed by duplicating the buffer contents in this situation, particularly not if the most obvious way to cause the problem is to have a very large buffer :-(. I think the direction to move in ought to be to use the existing buffer as-is, and have pqCheckOutBufferSpace refuse to enlarge the buffer while we are in "write frozen" state. It should be OK to append data to the buffer, though, so long as we remember how much we're allowed to pass to SSL_write when we next try to write. 2. According to the OpenSSL man pages, *both* SSL_read and SSL_write are defined to need to be re-called with the identical arguments after a WANT_READ or WANT_WRITE return. This means that pqReadData() is also at risk for this type of bug. I believe it could only happen in code paths where we call pqReadData when there is already some data in the buffer: if the caller then consumes some of that data, the next call to pqReadData would try to compact the buffer contents before making the pqsecure_read call. I think we probably need a "read frozen" state in which we won't enlarge or compact the inBuffer until SSL_read succeeds. 3. As of 9.0, somebody's decided that the backend needs nonblock read semantics in some cases. I probably should have complained harder about that before it went in, but since it's in, the backend is at risk for this same type of issue in secure_read(). I will mark the patch Returned With Feedback. I think that we need to address all these issues before we consider applying it. If we weren't hard up against the end of the CommitFest I might have a go at it myself, but I can't justify the time right now. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
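[Editorial illustration] A rough sketch of the "write frozen" rule described above. The write_frozen flag is a hypothetical addition to PGconn, shown only to illustrate the constraint that SSL_write must be re-presented with the identical buffer; this is not proposed libpq code.

    /*
     * Sketch only: pqCheckOutBufferSpace() refusing to grow (and therefore
     * possibly move) the output buffer while an SSL_write retry is pending.
     * conn->write_frozen is an assumed field, not part of today's PGconn.
     */
    static int
    pqCheckOutBufferSpace_sketch(size_t bytes_needed, PGconn *conn)
    {
        if (bytes_needed <= (size_t) conn->outBufSize)
            return 0;               /* enough room already */

        if (conn->write_frozen)
        {
            /*
             * OpenSSL requires the next SSL_write to be re-called with the
             * identical pointer and length, so reallocating now would break
             * the retry.  New data may still be appended within the existing
             * buffer, but only the frozen prefix is handed to SSL_write.
             */
            return EOF;
        }

        /* ... otherwise enlarge outBuffer as the current code does ... */
        return 0;
    }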
[HACKERS] Mysterious server crashes
Hello! Recently we have upgraded our debian system (sid), which has since started crashing mysteriously. We are still looking into that. It runs on 3ware RAID. Postgres package is 8.4.8-2. The database came back up apparently ok, except for indexes. Running reindex produces this error on one of the tables: ERROR: unexpected chunk number 1 (expected 0) for toast value 17539760 in pg_toast_16992 Same with select. I tried running reindex on toast table didn't help. Running: select * from pg_toast.pg_toast_16992 where chunk_id = 17539760; crashed postgres backend (and apparently the whole server). Is there anything we can/should do to fix the problem, besides restoring the whole database from backup? Thanks! Ziga -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Is there a committer in the house?
Josh Berkus wrote: > Alvaro, > > > It seems that by mentioning some people but not all, you offended both > > the people you mentioned (at least some of them, because they are > > already actively helping) and those that you didn't (at least some of > > them, because they are already actively helping; those that are not > > completely inactive in the project, that is). > > Yeah, everybody's super-touchy this week. Must be the weather. Somehow blaming everyone else doesn't seem like the proper reaction. :-( -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Is there a committer in the house?
Excerpts from Josh Berkus's message of vie jul 15 16:32:42 -0400 2011: > Alvaro, > > > It seems that by mentioning some people but not all, you offended both > > the people you mentioned (at least some of them, because they are > > already actively helping) and those that you didn't (at least some of > > them, because they are already actively helping; those that are not > > completely inactive in the project, that is). > > Yeah, everybody's super-touchy this week. Must be the weather. It's been regularly horrible here, so yeah, that might explain it. > Speaking of which, is there anything you could commit? ;-) I intend to give the finishing touches to Pavel's plpgsql patch and commit, so that's one item to take off the list. Everything else seems to be in somebody else's hands, or is larger than I can grab ATM. -- Álvaro Herrera The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] SSI error messages
Alvaro Herrera wrote: > Excerpts from Tom Lane's message: > >> I think that Peter's real concern is whether these are worth >> translating, and I share that doubt. Perhaps we should invent an >> errdetail_internal, parallel to errmsg_internal, that works like >> errdetail but doesn't treat the message as a candidate for >> translation? > > Yeah. The other point is that translated versions of those > phrases are likely to introduce randomly diverging terms, which is > not going to help with debugging. As long as the technology is > new enough that a user is going to need help from -hackers (or at > least someone who read and grokked README.SSI) to debug a problem, > there's no benefit to translating those errdetails. OK, since that seems to be the consensus, I put a patch together. Testing it now. Will post once I've confirmed it doesn't break anything. -Kevin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Commitfest Status: Sudden Death Overtime
All, As you can probably tell, we are not ready to end the commitfest. (I think Robert gave me this CF to show why even talking about a one-week mini-fest is a fantasy. If so, successful.) I've booted the patches which obviously aren't going to be immediately ready. Nine patches are ready for committer. The rest fall into the following buckets: KAGAI's PATCHES These are complex and need review by advanced hackers. They are also often interdependent. As such, I'd like to automatically move his patches to the next CF and ask that hackers who are engaged in working on them continue to do so between CFs. PATCHES WHICH I MOVED TO THE NEXT CF 9-2011: * Collect frequency statistics and selectivity estimation for arrays * Allow multiple Postgres clusters running on the same machine to distinguish themselves in the event log * lazy vxid locks PATCHES WHICH HAVE BEEN UPDATED AND NEED FURTHER REVIEW: * Cascade Replication PATCHES STILL UNDER ACTIVE DISCUSSION: * Add ability to constrain backend temporary file space PATCHES I CAN'T FIGURE OUT THE STATUS OF: * pg_comments system view * Bugfix for XPATH() if expression returns a scalar value So, down to a fairly limited list, except that we need a lot of committing. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] patch: Allow \dd to show constraint comments
Josh, > Fair enough. If the pg_comments patch does go down in flames, I can > circle back and patch up the rest of the holes in \dd. I am unable to figure out the status of the pg_comments patch from this thread. What's going on with it? -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] lazy vxid locks, v2
Robert, I should be able to do some performance testing on this, but not today. Questions: (1) can you re-link me to the pgbench and sysbench setup you used to test this originally? I'd like to implement those. (2) the max machine I can test these on is 16 cores. Is that adequate, or do we need more cores for real testing? -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Is there a committer in the house?
Alvaro, > It seems that by mentioning some people but not all, you offended both > the people you mentioned (at least some of them, because they are > already actively helping) and those that you didn't (at least some of > them, because they are already actively helping; those that are not > completely inactive in the project, that is). Yeah, everybody's super-touchy this week. Must be the weather. Speaking of which, is there anything you could commit? ;-) -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: Initial Review: JSON contrib modul was: Re: [HACKERS] Another swing at JSON
On Mon, Jul 4, 2011 at 10:22 PM, Joseph Adams wrote: > I'll try to submit a revised patch within the next couple days. Sorry this is later than I said. I addressed the issues covered in the review. I also fixed a bug where "\u0022" would become """, which is invalid JSON, causing an assertion failure. However, I want to put this back into WIP for a number of reasons: * The current code accepts invalid surrogate pairs (e.g. "\uD800\uD800"). The problem with accepting them is that it would be inconsistent with PostgreSQL's Unicode support, and with the Unicode standard itself. In my opinion: as long as the server encoding is universal (i.e. UTF-8), decoding a JSON-encoded string should not fail (barring data corruption and resource limitations). * I'd like to go ahead with the parser rewrite I mentioned earlier. The new parser will be able to construct a parse tree when needed, and it won't use those overkill parsing macros. * I recently learned that not all supported server encodings can be converted to Unicode losslessly. The current code, on output, converts non-ASCII characters to Unicode escapes under some circumstances (see the comment above json_need_to_escape_unicode). I'm having a really hard time figuring out how the JSON module should handle non-Unicode character sets. \u escapes in JSON literals can be used to encode characters not available in the server encoding. On the other hand, the server encoding can encode characters not present in Unicode (see the third bullet point above). This means JSON normalization and comparison (along with member lookup) are not possible in general. Even if I assume server -> UTF-8 -> server transcoding is lossless, the situation is still ugly. Normalization could be implemented a few ways: * Convert from server encoding to UTF-8, and convert \u escapes to UTF-8 characters. This is space-efficient, but the resulting text would not be compatible with the server encoding (which may or may not matter). * Convert from server encoding to UTF-8, and convert all non-ASCII characters to \u escapes, resulting in pure ASCII. This bloats the text by a factor of three, in the worst case. * Convert \u escapes to characters in the server encoding, but only where possible. This would be extremely inefficient. The parse tree (for functions that need it) will need to store JSON member names and strings either in UTF-8 or in normalized JSON (which could be the same thing). Help and advice would be appreciated. Thanks! - Joey json-contrib-rev1-20110714.patch.gz Description: GNU Zip compressed data -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
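[Editorial illustration] For the surrogate-pair point above, a minimal sketch of the validation a \uXXXX decoder needs; this is illustration only, not code taken from the attached patch.

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Combine a \uXXXX high/low surrogate pair into a single code point,
     * rejecting malformed pairs such as "\uD800\uD800".
     */
    static bool
    combine_surrogates(uint32_t hi, uint32_t lo, uint32_t *cp)
    {
        if (hi < 0xD800 || hi > 0xDBFF)
            return false;           /* first unit is not a high surrogate */
        if (lo < 0xDC00 || lo > 0xDFFF)
            return false;           /* "\uD800\uD800" fails here */

        *cp = 0x10000 + ((hi - 0xD800) << 10) + (lo - 0xDC00);
        return true;                /* *cp is now in U+10000..U+10FFFF */
    }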
Re: [HACKERS] SSI error messages
Excerpts from Tom Lane's message of vie jul 15 14:33:34 -0400 2011: > I think that Peter's real concern is whether these are worth > translating, and I share that doubt. Perhaps we should invent an > errdetail_internal, parallel to errmsg_internal, that works like > errdetail but doesn't treat the message as a candidate for translation? Yeah. The other point is that translated versions of those phrases are likely to introduce randomly diverging terms, which is not going to help with debugging. As long as the technology is new enough that a user is going to need help from -hackers (or at least someone who read and grokked README.SSI) to debug a problem, there's no benefit to translating those errdetails. -- Álvaro Herrera The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Three patches which desperately need reviewers
On Thu, Jul 14, 2011 at 01:42, Josh Berkus wrote: > Allow multiple Postgres clusters running on the same machine to > distinguish themselves in the event log > https://commitfest.postgresql.org/action/patch_view?id=562 I've reviewed this now, but I won't have time to take it across the finishing line in time for this CF. Regrettable, since the author came back with a response really quick. I propose we bump it to the next CF and I'll try to work on it before we even enter into that CF - unless somebody else feels comfortable taking it all the way. But given that it's been pending so long, I doubt that, but I'l be happy to deal with it in-between. -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Three patches which desperately need reviewers
Yeb Havinga writes: > On 2011-07-14 02:42, Josh Berkus wrote: >> The first two are difficult patches, but the other two are not. Please >> volunteer to give these patches a review; we owe it to our contributors >> to review everything before the end of the CF. > When is the end of the CF? (I'm strongly suspecting today, but haven't > been able to find it with a quick search) The scheduled end date is today, but we're obviously not ready. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Three patches which desperately need reviewers
On 2011-07-14 02:42, Josh Berkus wrote: The first two are difficult patches, but the other two are not. Please volunteer to give these patches a review; we owe it to our contributors to review everything before the end of the CF. When is the end of the CF? (I'm strongly suspecting today, but haven't been able to find it with a quick search) regards, Yeb -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] SSI error messages
Robert Haas writes: > On Jul 15, 2011, at 12:06 PM, "Kevin Grittner" > wrote: >> I have a suspicion that we might sometimes find the information >> conveyed by the detail useful when responding to users with >> questions; but the language as it stands seems confusing for users. > I think removing info here or making it harder to get would be a mistake. Agreed. If we can clarify the messages, and/or make sure the terminology they use is documented, that'd be a good thing; but let's not just remove them. I think that Peter's real concern is whether these are worth translating, and I share that doubt. Perhaps we should invent an errdetail_internal, parallel to errmsg_internal, that works like errdetail but doesn't treat the message as a candidate for translation? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
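[Editorial illustration] For concreteness, a sketch of what the proposed function and a converted call site could look like; the declaration below is an assumption modeled on errdetail(), not a committed definition, and the call mirrors the hunks in Kevin's patch elsewhere in this digest.

    /*
     * Hypothetical shape of the proposed function: identical to errdetail()
     * except that the format string is not marked for translation, so the
     * message-extraction tooling and gettext never see it.
     */
    extern int errdetail_internal(const char *fmt,...)
               __attribute__((format(printf, 1, 2)));

    /* A converted call site then reads (same arguments, new name): */
    errdetail_internal("Canceled on conflict out to old pivot %u.", xid)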
Re: [HACKERS] SSI error messages
On Jul 15, 2011, at 12:06 PM, "Kevin Grittner" wrote: > I have a suspicion that we might sometimes find the information > conveyed by the detail useful when responding to users with > questions; but the language as it stands seems confusing for users. I think removing info here or making it harder to get would be a mistake. SSI is a complicated technology and we are likely to find that we need more debugging and instrumentation, not less. ...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Reduced power consumption in WAL Writer process
Robert Haas writes: > On Jul 15, 2011, at 8:55 AM, Simon Riggs wrote: >> The only difference is how bulk write operations are handled. As long >> as we wake WALWriter before wal_buffers fills then we'll be good. >> Wakeup once per wal buffer is too much. I agree we should measure this >> to check how frequently wakeups are required for bulk ops. > Yeah. The trick is to get the wake-ups to be frequent enough without adding > too much latency to the backends that have to perform them. Off-hand, I don't > have a good feeling for how hard that will be. I'd say send the signal when wal buffers are more than X% full (maybe half). The suggestion to send it when wrapping around at the end of the array is not quite right, because that's an arbitrary condition that's not related to how much work there is for the walwriter to do. It should be cheap to check for this while advancing to a new wal buffer. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
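[Editorial illustration] A hedged sketch of that check, done at the point where a backend advances to a new WAL buffer page; the helper and latch names are assumptions, not existing xlog.c symbols.

    /*
     * Sketch only: wake the WAL writer once more than half of wal_buffers
     * holds unwritten data, rather than on every page or only at wraparound.
     * WalWriterLatch is an assumed shared latch, not an existing symbol.
     */
    static void
    WakeWalWriterIfNeeded(int curr_page, int write_page, int n_wal_buffers)
    {
        /* pages filled since the last write, accounting for wraparound */
        int     pending = (curr_page - write_page + n_wal_buffers) % n_wal_buffers;

        if (pending > n_wal_buffers / 2)
            SetLatch(&WalWriterLatch);
    }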
Re: [HACKERS] Need help understanding pg_locks
Thanks, applied. --- Florian Pflug wrote: > On Jul14, 2011, at 22:18 , Bruce Momjian wrote: > > !OID of the database in which the lock target exists, or > > !zero if the lock is a shared object, or > > !null if the lock is on a transaction ID > > For consistency, I think it should say "target" in the second part > of the sentence also now, instead of "lock ... on". > > Updated patch attached. I tried to make the descriptions a > bit more consistent, replaced "object" by "target", and > added "targeted by" after the phrase which describes the > locked (or waited-for) object. > > best regards, > Florian Pflug > > diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml > index d4a1d36..33be5d0 100644 > *** a/doc/src/sgml/catalogs.sgml > --- b/doc/src/sgml/catalogs.sgml > *** > *** 6928,6936 > oid > linkend="catalog-pg-database">pg_database.oid > > !OID of the database in which the object exists, or > !zero if the object is a shared object, or > !null if the object is a transaction ID > > > > --- 6928,6936 > oid > linkend="catalog-pg-database">pg_database.oid > > !OID of the database in which the lock target exists, or > !zero if the target is a shared object, or > !null if the target is a transaction ID > > > > *** > *** 6938,6944 > oid > linkend="catalog-pg-class">pg_class.oid > > !OID of the relation, or null if the object is not > a relation or part of a relation > > > --- 6938,6944 > oid > linkend="catalog-pg-class">pg_class.oid > > !OID of the relation targeted by the lock, or null if the target is > not > a relation or part of a relation > > > *** > *** 6947,6954 > integer > > > !Page number within the relation, or null if the object > !is not a tuple or relation page > > > > --- 6947,6954 > integer > > > !Page number targeted by the lock within the relation, > !or null if the target is not a relation page or tuple > > > > *** > *** 6956,6962 > smallint > > > !Tuple number within the page, or null if the object is not a tuple > > > > --- 6956,6963 > smallint > > > !Tuple number targeted by the lock within the page, > !or null if the target is not a tuple > > > > *** > *** 6964,6971 > text > > > !Virtual ID of a transaction, or null if the object is not a > !virtual transaction ID > > > > --- 6965,6972 > text > > > !Virtual ID of the transaction targeted by the lock, > !or null if the target is not a virtual transaction ID > > > > *** > *** 6973,6979 > xid > > > !ID of a transaction, or null if the object is not a transaction ID > > > > --- 6974,6981 > xid > > > !ID of the transaction targeted by the lock, > !or null if the target is not a transaction ID > > > > *** > *** 6981,6988 > oid > linkend="catalog-pg-class">pg_class.oid > > !OID of the system catalog containing the object, or null if the > !object is not a general database object > > > > --- 6983,6990 > oid > linkend="catalog-pg-class">pg_class.oid > > !OID of the system catalog containing the lock target, or null if the > !target is not a general database object > > > > *** > *** 6990,6997 > oid > any OID column > > !OID of the object within its system catalog, or null if the > !object is not a general database object. > For advisory locks it is used to distinguish the two key > spaces (1 for an int8 key, 2 for two int4 keys). > > --- 6992,6999 > oid > any OID column > > !OID of the lock target within its system catalog, or null if the > !target is not a general database object. > For advisory locks it is used to distinguish the two key > spaces (1 for an int8 key, 2 for two int4 keys). 
> > *** > *** 7001,7010 > smallint > > > !For a table column, this is the column n
Re: [HACKERS] Reduced power consumption in WAL Writer process
On Jul 15, 2011, at 8:55 AM, Simon Riggs wrote: > On Fri, Jul 15, 2011 at 2:29 PM, Robert Haas wrote: > >> If the primary goal here is to reduce power consumption, another option >> would be to keep the regular wake-ups most of the time but have some >> mechanism for putting the process to sleep until wakened when no activity >> happens for a certain period of time - say, 10 cycles. I'm not at all sure >> that's better, but it would be less of a change to the existing behavior. > > Now we have them, latches seem the best approach because they (mostly) > avoid heuristics. That's my feeling as well. > This proposal works same or better for async transactions. Right. I would say probably better. The potential for a reduction in latency here is very appealing. > The only difference is how bulk write operations are handled. As long > as we wake WALWriter before wal_buffers fills then we'll be good. > Wakeup once per wal buffer is too much. I agree we should measure this > to check how frequently wakeups are required for bulk ops. Yeah. The trick is to get the wake-ups to be frequent enough without adding too much latency to the backends that have to perform them. Off-hand, I don't have a good feeling for how hard that will be. ...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] SSI error messages
Peter Eisentraut wrote: > Some of these new error messages from the SSI code are a mouthful: > > not enough elements in RWConflictPool to record a rw-conflict > not enough elements in RWConflictPool to record a potential > rw-conflict > > These are basically "out of shared memory" conditions that could > be moved to a DETAIL message. Yes and no. The resource which is exhausted at this point *is* in shared memory, but its allocation of space is fixed at startup -- it's not competing with other users of shared memory for the shared memory "slush space". I have some concern that an "out of shared memory" message might confuse people on that aspect of the issue. That said, if people find it helps more than it hurts, I have no objection to a wording change. > Canceled on identification as a pivot, during conflict out > checking. > Canceled on conflict out to old pivot %u. > Canceled on identification as a pivot, with conflict out to > old committed transaction %u. > Canceled on conflict out to old pivot. > Canceled on identification as a pivot, during conflict in > checking. > Canceled on identification as a pivot, during write. > Canceled on conflict out to pivot %u, during read. > Canceled on identification as a pivot, during commit attempt. > Canceled on commit attempt with conflict in from prepared > pivot. > > These are DETAIL messages, with the rest of the report saying > > ERROR: could not serialize access due to read/write > dependencies among transactions > HINT: The transaction might succeed if retried. > > AFAICT, the documentation doesn't mention anything about this > "pivot" that keeps coming up. Good point. It's in the README-SSI and the Wiki page, but not in the user docs; so using it in the error detail seems likely to confuse. Also, some of the information is referring to phases of processing which can only be understood by looking at the source code, like "during conflict out checking." While it might not be entirely unreasonable to document what a pivot is, documenting some of the other language is really out of the question. > Is it useful that have the user face this information? Is there > anything a user can derive from seeing one of these messages > versus another, and in addition to the error and hint, that would > help them address the situation? A user with a firm grasp of SSI might be better able to figure out mitigation techniques for frequently occurring rollback scenarios with that detail. We did have one thread where someone was hoping for as much information as we could provide, but I don't know how typical that will be: http://archives.postgresql.org/pgsql-hackers/2010-09/msg01133.php I find the differentiation of causes helpful when working through test cases, but moving this information to DEBUG messages might satisfy that need. I've also wondered whether that HINT line is worthwhile. I have a suspicion that we might sometimes find the information conveyed by the detail useful when responding to users with questions; but the language as it stands seems confusing for users. -Kevin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_class.relistemp
"David E. Wheeler" writes: > So pgTAP creates temporary tables to store result sets so that it can then > compare the results of two queries. The function in question was getting a > list of columns in such a temporary table in order to make sure that the > types were the same between two such tables before comparing results. It > checked relistemp to make sure it was looking at the temp table rather than > some other table that might happen to have the same name. Well, actually, that code flat out doesn't work, so whether relistemp is available in 9.1 is the least of your problems. Consider what would happen if two concurrent sessions did this with the same temp table name. How about doing this instead? SELECT pg_catalog.format_type(a.atttypid, a.atttypmod) FROM pg_catalog.pg_attribute a JOIN pg_catalog.pg_class c ON a.attrelid = c.oid WHERE c.oid = 'pg_temp.tablenamehere'::pg_catalog.regclass AND attnum > 0 AND NOT attisdropped ORDER BY attnum This would only work in releases that know the pg_temp abbreviation, which includes any minor release later than March 2007. But since relistemp doesn't even exist before 8.4 (released in 2009), that's still more backwards-portable than what you've got. You could also just do 'tablenamehere'::pg_catalog.regclass and trust that the user didn't move pg_temp to the back of the search path. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_class.relistemp
On Fri, Jul 15, 2011 at 17:25, Tom Lane wrote: > Magnus Hagander writes: >> On Thu, Jul 14, 2011 at 21:59, Josh Berkus wrote: >>> So if we're going to break compatibility, then we could stand to make a >>> little noise about it. > >> We've broken the admin apps in pretty much every single release. And >> they generally don't complain. > > Yeah. Quite honestly, this thread is trying to turn a molehill into a > mountain. I will confidently predict that the really big, nasty change > in 9.1 is the change of default standard_conforming_strings. That's > going to break way more apps than anything else, and possibly in > security-critical ways. Anybody who moves an app onto 9.1 without > testing it is going to suffer big-time. +(a lot) in fact, I think we should definitely "shout out" more clearly about that one in the release notes. Yes, it's the very first item. But I think it deserves a big fat warning box as well. -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_class.relistemp
Magnus Hagander writes: > On Thu, Jul 14, 2011 at 21:59, Josh Berkus wrote: >> So if we're going to break compatibility, then we could stand to make a >> little noise about it. > We've broken the admin apps in pretty much every single release. And > they generally don't complain. Yeah. Quite honestly, this thread is trying to turn a molehill into a mountain. I will confidently predict that the really big, nasty change in 9.1 is the change of default standard_conforming_strings. That's going to break way more apps than anything else, and possibly in security-critical ways. Anybody who moves an app onto 9.1 without testing it is going to suffer big-time. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Is there a committer in the house?
Excerpts from Josh Berkus's message of jue jul 14 15:01:10 -0400 2011: > All, > > Currently we have 8 patches "Ready for Committer" in the current CF. > Some of them have been that status for some time. > > From traffic on this list, I'm getting the impression that nobody other > than Robert, Heikki and Tom are committing other people's patches. I > know we have more committers than that. Bruce? Simon? Itagaki? Bueller? It seems that by mentioning some people but not all, you offended both the people you mentioned (at least some of them, because they are already actively helping) and those that you didn't (at least some of them, because they are already actively helping; those that are not completely inactive in the project, that is). -- Álvaro Herrera The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Reduced power consumption in WAL Writer process
Excerpts from Simon Riggs's message of vie jul 15 09:55:40 -0400 2011: > On Fri, Jul 15, 2011 at 2:29 PM, Robert Haas wrote: > > > If the primary goal here is to reduce power consumption, another option > > would be to keep the regular wake-ups most of the time but have some > > mechanism for putting the process to sleep until wakened when no activity > > happens for a certain period of time - say, 10 cycles. I'm not at all sure > > that's better, but it would be less of a change to the existing behavior. > > Now we have them, latches seem the best approach because they (mostly) > avoid heuristics. Yeah, there's no reason for "less of a change" to be a criterion to determine the best way forward. The new tech is clearly a better solution overall, so let's just get rid of the cruft. -- Álvaro Herrera The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] ON COMMIT action not catalogued?
Peter Eisentraut writes: > (Going through some loose ends in the information schema ...) > Is my understanding right that the ON COMMIT action of temporary tables > is not recorded in the system catalogs anywhere? Would make sense, > wouldn't be a big problem, just want to make sure I didn't miss > anything. Yes, it's just remembered in local state within the owning backend. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Re: patch review : Add ability to constrain backend temporary file space
Cédric Villemain writes: >> On 15/07/11 14:57, Tatsuo Ishii wrote: >>> Maybe we could add more info regarding current usage and requested >>> amount in addition to the temp file limit value. I mean something >>> like: >>> >>> ERROR: aborting due to exceeding temp file limit. Current usage 9000kB, >>> requested size 1024kB, thus it will exceed temp file limit 1kB. > Remember that what will happen is probably: > ERROR: aborting due to exceeding temp file limit. Current usage 8000kB, > requested size 8008kB, thus it will exceed temp file limit 8kB. > because temp files are increased by 8kB at a time, rarely more Yes. I think the extra detail Tatsuo proposes is useless and possibly confusing. I don't object to stating the current limit, but the other numbers are not likely to be helpful to anyone. (If we had some way of knowing what the file would ultimately grow to if we allowed the query to continue, that would be useful; but of course we don't know that.) regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] patch for distinguishing PG instances in event log
Magnus, Thank you for reviewing my patch. I'm going to modify the patch according to your comments and re-submit it. Before that, I'd like to discuss some points and get your agreement. From: "Magnus Hagander" + + On Windows, you need to register an event source + and its library with the operating system in order + to make use of the eventlog option for + log_destination. + See for details. + * This part is not strictly correct - you don't *need* to do that, it just makes things look nicer, no? How about the following statement? Is it better to say "correctly" instead of "cleanly"? -- On Windows, when you use the eventlog option for log_destination, you need to register an event source and its library with the operating system so that the Windows Event Viewer can display event log messages cleanly. -- * Also, what is the use for set_eventlog_parameters()? It's just a string variable, it should work without it. Yes, exactly. I'll follow your modification. * We these days avoid #ifdef'ing gucs just because they are not on that platform - so the list is consistent. The guc should be available on non-windows platforms as well. I see. When I saw syslog_ident not being #ifndef WIN32'ed, I thought that was because syslog might be available on Cygwin or MinGW. Anyway, I'll take your modification. * The guc also needs to go in postgresql.conf.sample I missed this completely. * Are we really allowed to call MessageBox in DllRegisterServer? Won't that break badly in cases like silent installs? It seems that the only way to report the error is MessageBox. Printing to stdout/stderr does not show any messages on the command prompt. I guess that's why the original author of pgevent.c used MessageBox. We cannot know within the DLL if the /s (silent) switch was specified to regsvr32.exe. If we want to suppress MessageBox during silent installs, it seems that we need to detect the silent install via an environment variable or similar. That is: C:\> set PGSILENT=true C:\> regsvr32.exe ... * We never build in unicode mode, so all those checks are unnecessary. Attached is an updated patch, which doesn't work yet. I believe the changes to the backend are correct, but probably some of the cleanups and changes in the dll are incorrect, because I seem to be unable to register either the default or a custom handler so far. The cause of the problem you are encountering is here: + if (pszCmdLine && *pszCmdLine != '\0') + strncpy(event_source, sizeof(event_source), pszCmdLine); + else + strcpy(event_source, "PostgreSQL"); DllInstall() always receives the /i argument in UTF-16. str... functions like strncpy() cannot be used for handling UTF-16 strings. For example, when you run "regsvr32.exe /i:abc ...", DllInstall's pszCmdLine value becomes "a\0b\0c\0". So, strncpy() just copies "a". This is the reason why you cannot register the custom event source. The reason why you cannot register the default event source (i.e. running regsvr32 without /i) is that you memset()ed event_source in DllMain() and set "PostgreSQL" to it in DllInstall(). DllInstall() is not called when /i is not specified. So, event_source remains empty. To solve the problem, we need to use wcstombs_s() instead of strncpy(), and initialize event_source to "PostgreSQL" where it is defined. Regards MauMau -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
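To make the fix concrete, here is a minimal C sketch of the wcstombs_s() approach MauMau describes. It is purely illustrative and not the submitted patch: the event_source buffer size, the error handling, and the function body are assumptions; only the DllInstall() entry point and wcstombs_s() come from the discussion above.

/*
 * Illustrative sketch only, not the actual patch: convert the UTF-16
 * command line that regsvr32 passes to DllInstall() into the narrow
 * event_source buffer, falling back to the default "PostgreSQL" when no
 * /i argument was given.  Buffer size and error handling are assumptions.
 */
#include <windows.h>
#include <stdlib.h>

static char event_source[256] = "PostgreSQL";   /* initialized where defined */

STDAPI
DllInstall(BOOL bInstall, PCWSTR pszCmdLine)
{
    if (bInstall && pszCmdLine && *pszCmdLine != L'\0')
    {
        size_t  converted = 0;

        /* wcstombs_s() converts the wide-char /i argument correctly */
        if (wcstombs_s(&converted, event_source, sizeof(event_source),
                       pszCmdLine, _TRUNCATE) != 0)
            return E_FAIL;
    }
    /* when /i is absent, event_source keeps its default value */
    return S_OK;
}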
Re: [HACKERS] Understanding GIN posting trees
Oh, I see. You essentially do a merge join of all the posting trees of query keys. Hmm, but we do need to scan all the posting trees of all the matched keys in whole anyway. We could collect all TIDs in the posting lists of all the keys into separate TIDBitmaps, and then combine the bitmaps, calling consistentFn for each TID that was present in at least one bitmap. I guess the performance characteristics of that would be somewhat different from what we have now, and you'd need to keep a lot of in-memory bitmaps if the query contains a lot of keys. I hope to reimplement the amgettuple interface someday, and that interface is designed for a small startup cost. With bitmaps per search key it will be impossible. While we're at it, it just occurred to me that if the number of query keys is limited, say <= 16, you could build a lookup table for each combination of keys either occurring or not. You could then use that instead of calling consistentFn for each possible match. You could even use the table to detect common cases like "all/any keys must match", perhaps opening some optimization opportunities elsewhere. I'm afraid that it starts to look like a separate optimizer/planner :) -- Teodor Sigaev E-mail: teo...@sigaev.ru WWW: http://www.sigaev.ru/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
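For readers unfamiliar with the structure being discussed, here is a toy C sketch of the "merge join of posting lists" idea. It is not GIN source code: sorted arrays of plain integer TIDs stand in for the posting trees, a consistent-style callback stands in for consistentFn, and all names are illustrative assumptions.

/*
 * Toy sketch: each query key has a sorted posting list, and we advance
 * all lists in lockstep, calling a consistent-style callback with the
 * per-key match flags for the smallest TID seen.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

typedef struct
{
    const uint32_t *tids;       /* sorted posting list for one key */
    size_t      ntids;
    size_t      pos;
} PostingStream;

typedef bool (*consistent_fn) (const bool *check, int nkeys);

static void
merge_posting_lists(PostingStream *streams, int nkeys,
                    consistent_fn consistent, void (*emit) (uint32_t tid))
{
    for (;;)
    {
        uint32_t    min_tid = UINT32_MAX;
        bool        check[32];  /* assumes nkeys <= 32 for this sketch */
        int         i;

        /* find the smallest TID among the streams that are not exhausted */
        for (i = 0; i < nkeys; i++)
            if (streams[i].pos < streams[i].ntids &&
                streams[i].tids[streams[i].pos] < min_tid)
                min_tid = streams[i].tids[streams[i].pos];

        if (min_tid == UINT32_MAX)
            break;              /* every posting list is exhausted */

        /* build the per-key match flags and advance the matching streams */
        for (i = 0; i < nkeys; i++)
        {
            check[i] = (streams[i].pos < streams[i].ntids &&
                        streams[i].tids[streams[i].pos] == min_tid);
            if (check[i])
                streams[i].pos++;
        }

        if (consistent(check, nkeys))
            emit(min_tid);
    }
}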
Re: [HACKERS] Reduced power consumption in WAL Writer process
On Fri, Jul 15, 2011 at 2:29 PM, Robert Haas wrote: > If the primary goal here is to reduce power consumption, another option > would be to keep the regular wake-ups most of the time but have some > mechanism for putting the process to sleep until wakened when no activity > happens for a certain period of time - say, 10 cycles. I'm not at all sure > that's better, but it would be less of a change to the existing behavior. Now we have them, latches seem the best approach because they (mostly) avoid heuristics. This proposal works same or better for async transactions. The only difference is how bulk write operations are handled. As long as we wake WALWriter before wal_buffers fills then we'll be good. Wakeup once per wal buffer is too much. I agree we should measure this to check how frequently wakeups are required for bulk ops. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Reduced power consumption in WAL Writer process
My instrumentation wasn't that good. I was using powertop 1.13, which apparently goes to great lengths to group processes by various criteria (including process group), but doesn't actually offer the option of seeing that instrumentation per process. I'm using version 1.98 beta 1 as of now, which provides per-process instrumentation - it only works with kernel versions 2.6.36+ though. The per-process breakdown of wake-ups per second is:

248.3 us/s    5.0     Process postgres: wal writer process
281.0 us/s    4.9     Process postgres: writer process
 82.3 us/s    1.1     Process postgres: autovacuum launcher process
 82.3 us/s    0.4     Process postgres: stats collector process
442.8 us/s    0.15    Process postgres
 23.6 us/s    0.15    Process postgres -d1

The second column from the left is wake-ups per second. As previously reported and as you can see here, there are about 11.5 idle wake-ups per second per cluster. Note that archiving was running when this instrumentation was performed, but the recently-improved archiver wasn't listed at all, presumably because powertop didn't detect any wake-ups in the period of instrumentation, which was 10 or 20 seconds. As you'd expect, the WAL writer is woken up (1 second / wal_writer_delay milliseconds) times per second. On 14 July 2011 11:00, Simon Riggs wrote: >> That was my first thought too - but I wonder if that's too aggressive? A >> backend that does for example a large bulk load will cycle through the >> buffers real quick. It seems like a bad idea to wake up walwriter between >> each buffer in that case. Then again, setting a latch that's already set is >> cheap, so maybe it works fine in practice. >> >> Maybe it would be better to do it less frequently, say, every time you >> switch to new WAL segment. Or every 10 buffers or something like that. > Yes, that's roughly what I'm saying. When nextidx == 0 is just after we > wrapped wal_buffers, i.e. we only wake up the WALWriter every > wal_buffers pages. I'll work towards that implementation. -- Peter Geoghegan http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training and Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Reduced power consumption in WAL Writer process
On Jul 14, 2011, at 4:42 AM, Simon Riggs wrote: > On Thu, Jul 14, 2011 at 9:57 AM, Fujii Masao wrote: > >> Currently walwriter might write out the WAL before a transaction commits. >> IOW, walwriter tries to write out the WAL in wal_buffers in every wakeups. >> This might be useful for long transaction which generates lots of WAL >> records before commit. So we should call SetLatch() in XLogInsert() instead >> of RecordTransactionCommit()? Though I'm not sure how much walwriter >> improves the performance of synchronous commit case.. > > Yeh, we did previously have a heuristic to write out the WAL when it > was more than half full. Not sure I want to put exactly that code back > into such a busy code path. > > I suggest that we set latch every time the wal buffers wrap. > > So at the bottom of AdvanceXLInsertBuffer(), if nextidx == 0 then > SetLatch on the WALWriter. > > That's a simple test and we only check it if we're switch WAL buffer page. Seems reasonable at first blush, but I worry that changing the algorithm here could change performance in somewhat unpredictable ways. Some of those might be improvements, but I think some careful measurement would be in order. If the primary goal here is to reduce power consumption, another option would be to keep the regular wake-ups most of the time but have some mechanism for putting the process to sleep until wakened when no activity happens for a certain period of time - say, 10 cycles. I'm not at all sure that's better, but it would be less of a change to the existing behavior. ...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
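For illustration, a hypothetical C sketch of the wake-up rule Simon proposes above. Only SetLatch() and the "wake when nextidx wraps to 0" rule come from the thread; the helper function and the latch pointer name are assumptions, not PostgreSQL source.

/*
 * Hypothetical sketch: at the bottom of AdvanceXLInsertBuffer(), wake the
 * WAL writer only when the insert buffer index wraps back to 0, i.e.
 * roughly once per wal_buffers worth of WAL.
 */
#include "postgres.h"
#include "storage/latch.h"

static void
WakeWalWriterIfWrapped(int nextidx, Latch *walwriter_latch)
{
    if (nextidx == 0 && walwriter_latch != NULL)
        SetLatch(walwriter_latch);  /* cheap if the latch is already set */
}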
Re: [HACKERS] Single pass vacuum - take 1
On Fri, Jul 15, 2011 at 12:56 AM, Pavan Deolasee wrote: > On Thu, Jul 14, 2011 at 6:22 PM, Simon Riggs wrote: >> This is a very rare issue, because of all the work yourself and Heikki >> have put in. >> > > I don't think its rare case since vacuum on any table, small or large, would > take two passes today. Avoiding one of the two, would help the system in > general. HOT and other things help to a great extent, but unfortunately they > don't make vacuum completely go away. I don't really believe that. To progress this I think we need a clear test case that we can judge this patch against. Building that will show quite how rarely this is a problem. >> It's only important when we have a (1) big table (hence long scan >> time), (2) a use case that avoids HOT *and* (3) we have dirtied a >> large enough section of table that the vacuum map is ineffective and >> we need to scan high % of table. That combination is pretty rare, so >> penalising everybody else with 8 bytes per block seems too much to me. >> > > The additional space is allocated only for those pages which has dead-vacuum > line pointers and would stay only till the next HOT-prune operation on the > page. So everybody does not pay the penalty, even if we assume its too much > and if thats what worries you most. OK, that's the winner. If you're only doing this when it really does matter then I'm happy. My additional requests would be that we can easily tell which blocks have been modified like this, that we have a way to turn this off if we get bugs for next few releases, that we check it all works with Hot Standby just fine and that all the block inspection tools support it. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Re: patch review : Add ability to constrain backend temporary file space
2011/7/15 Mark Kirkwood : > On 15/07/11 14:57, Tatsuo Ishii wrote: >> Maybe we could add more info regarding current usage and requested >> amount in addition to the temp file limit value. I mean something >> like: >> >> ERROR: aborting due to exceeding temp file limit. Current usage 9000kB, >> requested size 1024kB, thus it will exceed temp file limit 1kB. >> > I modeled the original message on what happens when statement timeout is > exceeded, which doesn't state its limit in the error message at all - > actually I did wonder if there was an informal standard for *not* stating > the value of the limit that is being exceeded! However, I agree with you and > think it makes sense to include it here. I wonder if the additional detail > you are suggesting above might be better added to a HINT - what do you > think? If it is a better idea to just add it in the message as above I can > certainly do that. Remember that what will happen is probably: ERROR: aborting due to exceeding temp file limit. Current usage 8000kB, requested size 8008kB, thus it will exceed temp file limit 8kB. because temp files are increased by 8kB at a time, rarely more (and by rare I mean that it can happen via an extension or in the future, not with current core PostgreSQL). This kind of message can also trouble the DBA by suggesting that the query doesn't need more than what it was asking for at this moment. That being said, I have no strong opinion on the HINT message if the documentation is clear enough not to confuse DBAs. -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Re: patch review : Add ability to constrain backend temporary file space
On 15/07/11 14:57, Tatsuo Ishii wrote: Maybe we could add more info regarding current usage and requested amount in addition to the temp file limit value. I mean something like: ERROR: aborting due to exceeding temp file limit. Current usage 9000kB, requested size 1024kB, thus it will exceed temp file limit 1kB. I modeled the original message on what happens when statement timeout is exceeded, which doesn't state its limit in the error message at all - actually I did wonder if there was an informal standard for *not* stating the value of the limit that is being exceeded! However, I agree with you and think it makes sense to include it here. I wonder if the additional detail you are suggesting above might be better added to a HINT - what do you think? If it is a better idea to just add it in the message as above I can certainly do that. Cheers Mark -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
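To make the shape of such a message concrete, here is a hedged C sketch of one possible form (state the limit itself in the primary message, relegate the extra numbers to a secondary field). This is not the committed patch; the helper function, the variable names, and the exact wording are assumptions.

/*
 * Illustrative sketch only: report the configured limit in the primary
 * message and move the usage numbers into errdetail(), per the
 * discussion above.  temp_file_limit, current_kb and requested_kb are
 * assumed names.
 */
#include "postgres.h"

static void
report_temp_file_limit_exceeded(int temp_file_limit, long current_kb,
                                long requested_kb)
{
    ereport(ERROR,
            (errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
             errmsg("temporary file size exceeds temp_file_limit (%dkB)",
                    temp_file_limit),
             errdetail("Current temporary file usage is %ldkB and %ldkB more was requested.",
                       current_kb, requested_kb)));
}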
Re: [HACKERS] WIP: Fast GiST index build
On Fri, Jul 15, 2011 at 12:53 AM, Heikki Linnakangas <heikki.linnakan...@enterprisedb.com> wrote: > On 14.07.2011 23:41, Alexander Korotkov wrote: >> Do you think using "rightlink" as a pointer to the parent page is possible >> during index build? It would allow the code to be simplified significantly, >> because there would be no more need to maintain in-memory structures for >> memorizing parents. >> > > I guess, but where do you store the rightlink, then? Would you need a final > pass through the index to fix all the rightlinks? > > I think you could use the NSN field. It's used to detect concurrent page > splits, but those can't happen during index build, so you don't need that > field during index build. You just have to make it look like an otherwise > illegal NSN, so that it won't be mistaken for a real NSN after the index is > built. Maybe add a new flag to mean that the NSN is actually invalid. Thank you for the advice. But I didn't take into account that in this case I would need to update the parent link in many pages (which might not be in cache) on a split. It seems that I still need to maintain some in-memory structure for finding parents. -- With best regards, Alexander Korotkov.
Re: [HACKERS] Small patch for GiST: move childoffnum to child
On Fri, Jul 15, 2011 at 1:27 PM, Heikki Linnakangas < heikki.linnakan...@enterprisedb.com> wrote: > Ok, committed this now. > Thank you. > I decided to rename the childoffnum field to "downlinkoffnum". I figured > it'd be dangerous that the field means something subtly different in > different versions, if we need to backpatch bug fixes that use that field. Yes, it seems very reasonable. -- With best regards, Alexander Korotkov.
Re: [HACKERS] Small patch for GiST: move childoffnum to child
On 13.07.2011 22:04, Heikki Linnakangas wrote: On 13.07.2011 21:56, Alexander Korotkov wrote: Thank you very much for detail explanation. But this line of modified patch seems strange for me: *newchildoffnum = blkno; I believe it should be: *newchildoffnum = i; Yes, you're right. It's scary that it worked during testing anyway. Maybe the resulting tree was indeed broken but it didn't affect the subsequent inserts so I didn't notice. Ok, committed this now. I decided to rename the childoffnum field to "downlinkoffnum". I figured it'd be dangerous that the field means something subtly different in different versions, if we need to backpatch bug fixes that use that field. -- Heikki Linnakangas EnterpriseDB http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Patch Review: Collect frequency statistics and selectivity estimation for arrays
On Fri, Jul 15, 2011 at 2:13 AM, Nathan Boley wrote: > First, it makes me uncomfortable that you are using the MCV and histogram > slot kinds in a way that is very different from other data types. > > I realize that tsvector uses MCV in the same way that you do but: > > 1) I don't like that very much either. > 2) TS vector is different in that equality (in the btree sense) > doesn't make sense, whereas it does for arrays. > Actually, it's not the MCV slot that is used but MCELEM, which is a different slot; the pg_stats view maps both into the same fields. Certainly, non-standard use of the histogram slot should be avoided. -- With best regards, Alexander Korotkov.
Re: [HACKERS] Small patch for GiST: move childoffnum to child
On 14.07.2011 13:29, Alexander Korotkov wrote: On Thu, Jul 14, 2011 at 12:56 PM, Heikki Linnakangas <heikki.linnakan...@enterprisedb.com> wrote: First, notice that we're setting "ptr->parent = top". 'top' is the current node we're processing, and ptr represents the node to the right of the current node. The current node is *not* the parent of the node to the right. I believe that line should be "ptr->parent = top->parent". I think the same. Second, we're adding the entry for the right sibling to the end of the list of nodes to visit. But when we process entries from the list, we exit immediately when we see a leaf page. That means that the right sibling can get queued up behind leaf pages, and thus never visited. I think a possible solution is to save the right sibling immediately after the current page. Thus, this code fragment should look like this:

if (top->parent && XLByteLT(top->parent->lsn, GistPageGetOpaque(page)->nsn) &&
    GistPageGetOpaque(page)->rightlink != InvalidBlockNumber /* sanity check */ )
{
    /* page splited while we thinking of... */
    ptr = (GISTInsertStack *) palloc0(sizeof(GISTInsertStack));
    ptr->blkno = GistPageGetOpaque(page)->rightlink;
    ptr->childoffnum = InvalidOffsetNumber;
    ptr->parent = top->parent;
    ptr->next = top->next;
    top->next = ptr;
    if (tail == top)
        tail = ptr;
}

Agreed, committed. Thanks for verifying my thinking. -- Heikki Linnakangas EnterpriseDB http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_class.relistemp
On Thu, Jul 14, 2011 at 10:54 PM, Josh Berkus wrote: > >> As one of said vendors, I completely disagree. > > I don't agree that you qualify as a vendor. You're on the friggin' core > team. And I look after the development of the leading open source management tool for PostgreSQL, as well as a number of tools at EnterpriseDB. I might be on the core team, but I still build tools. > I'm talking about vendors like DBVizualizer or TORA, for which > PostgreSQL is just one of the databases they support. If stuff breaks > gratuitously, the reaction of some of them is always to either drop > support or delay it for a year or more. This doesn't benefit our community. I'm not talking about gratuitously breaking things - just not masking essential changes. >> There are a ton of >> things that change with each release, and all we do by putting in >> hacks for backwards compatibility is add bloat that needs to be >> maintained, and encourage vendors to be lazy. > > I don't agree that having comprehensive system views with multi-version > stability would be a "hack". That isn't what was being suggested. >> Breaking compatibility is actually something that is important to us - it >> forces us to fix obvious issues, and makes it much harder to >> inadvertently miss important changes. > > What I'm hearing from you is: "Breaking backwards compatibility is > something we should do more of because it lets us know which vendors are > paying attention and weeds out the unfit." Is that what you meant to say? No, I meant to say precisely what I did say. By not masking changes in the catalogs, we draw attention to things that have changed for good reason, and almost certainly need to be addressed. > That seems like a way to ensure that PostgreSQL support continues to be > considered optional, or based on outdated versions, for multi-database > tools. Whilst this particular case might be safe to just ignore in third-party tools, other changes to the catalogs are not, and masking them could potentially hide bugs or issues that need to be fixed to actually work properly with the newer version of the server. -- Dave Page Blog: http://pgsnake.blogspot.com Twitter: @pgsnake EnterpriseDB UK: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Patch Review: Collect frequency statistics and selectivity estimation for arrays
Hi! Thank you for the review. I have a few questions about it. On Fri, Jul 15, 2011 at 2:13 AM, Nathan Boley wrote: > First, it makes me uncomfortable that you are using the MCV and histogram > slot kinds in a way that is very different from other data types. > > I realize that tsvector uses MCV in the same way that you do but: > > 1) I don't like that very much either. > 2) TS vector is different in that equality (in the btree sense) > doesn't make sense, whereas it does for arrays. > > Using the histogram slot for the array lengths is also very surprising to > me. > > Why not just use a new STA_KIND? It's not like we are running out of > room, and this will be the second 'container' type that splits the > container and stores stats about the elements. > Thus, do you think we should collect both btree and frequency/length statistics for arrays? > 1) In calc_distr you go to some lengths to avoid round off errors. Since it > is certainly just the order of the estimate that matters, why not just > perform the calculation in log space? > It seems to me that I didn't do anything to avoid round-off errors there... -- With best regards, Alexander Korotkov.
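For what "perform the calculation in log space" could look like in practice, here is a small, purely illustrative C helper, not taken from the patch: probabilities are kept as logarithms and combined with a numerically stable log-sum-exp, so very small intermediate values do not underflow to zero.

/*
 * Illustrative sketch of the reviewer's "do it in log space" suggestion:
 * log(exp(la) + exp(lb)) computed without leaving log space.
 */
#include <math.h>

static double
log_add(double la, double lb)
{
    double  hi = (la > lb) ? la : lb;
    double  lo = (la > lb) ? lb : la;

    if (isinf(lo) && lo < 0)
        return hi;              /* adding a zero probability */
    return hi + log1p(exp(lo - hi));
}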
Re: [HACKERS] patch: enhanced get diagnostics statement 2
Hello 2011/7/14 Alvaro Herrera : > Excerpts from Pavel Stehule's message of jue jul 14 16:25:56 -0400 2011: >> 2011/7/14 Alvaro Herrera : >> > A couple items for this patch: > >> it is good idea > > Thanks ... I expect you're going to resubmit the patch based on David's > last version and my comments? > yes, see a attachment Regards Pavel > -- > Álvaro Herrera > The PostgreSQL Company - Command Prompt, Inc. > PostgreSQL Replication, Consulting, Custom Development, 24x7 support > *** ./doc/src/sgml/plpgsql.sgml.orig 2011-07-15 07:53:03.069787671 +0200 --- ./doc/src/sgml/plpgsql.sgml 2011-07-15 08:36:00.504591377 +0200 *** *** 1387,1393 command, which has the form: ! GET DIAGNOSTICS variable = item , ... ; This command allows retrieval of system status indicators. Each --- 1387,1393 command, which has the form: ! GET CURRENT DIAGNOSTICS variable = item , ... ; This command allows retrieval of system status indicators. Each *** *** 1488,1493 --- 1488,1580 + + Obtaining the Exception Status + + + Inside an exception handler, one may retrieve detailed + information about the current exception using THE + GET STACKED DIAGNOSTICS command, which has the form: + + + GET STACKED DIAGNOSTICS variable = item , ... ; + + + + + This command allows retrieval of the exception's data. Each + item is a key word identifying a state + value to be assigned to the specified variable (which should be + of the right data type to receive it). The currently available + status items are: + + + Stacked diagnostics values + + + + Name + Return type + Description + + + + + RETURNED_SQLSTATE + text + the SQLSTATE of the exception + + + MESSAGE_TEXT + text + the text of the exception's message + + + PG_EXCEPTION_DETAIL + text + the text of the exception's detail message + + + PG_EXCEPTION_HINT + text + the text of the exception's hint message + + + PG_EXCEPTION_CONTEXT + text + lines of text describing the call stack + + + + + + + + If an exception does not contain a value for an item, an empty string + will be returned. + + + + An example: + + DECLARE + text_var1 text; + text_var2 text; + text_var3 text; + BEGIN + -- some processing which might cause an exception + ... 
+ EXCEPTION WHEN OTHERS THEN + GET STACKED DIAGNOSTICS text_var1 = MESSAGE_TEXT, + text_var2 = PG_EXCEPTION_DETAIL, + text_var3 = PG_EXCEPTION_HINT; + END; + + + + + + Doing Nothing At All *** ./src/backend/utils/errcodes.txt.orig 2011-07-15 07:53:03.070787661 +0200 --- ./src/backend/utils/errcodes.txt 2011-07-15 08:01:04.522609180 +0200 *** *** 132,137 --- 132,140 0P000EERRCODE_INVALID_ROLE_SPECIFICATION invalid_role_specification + Section: Class 0Z - Diagnostics Exception + 0Z002EERRCODE_STACKED_DIAGNOSTICS_ACCESSED_WITHOUT_ACTIVE_HANDLERstacked_diagnostics_accessed_without_active_handler + Section: Class 20 - Case Not Found 2EERRCODE_CASE_NOT_FOUND case_not_found *** ./src/pl/plpgsql/src/gram.y.orig 2011-07-15 07:53:03.071787651 +0200 --- ./src/pl/plpgsql/src/gram.y 2011-07-15 09:29:27.959407772 +0200 *** *** 206,211 --- 206,212 %type getdiag_list %type getdiag_list_item %type getdiag_item getdiag_target + %type getdiag_opt %type opt_scrollable %type opt_fetch_direction *** *** 250,256 --- 251,259 %token K_CLOSE %token K_COLLATE %token K_CONSTANT + %token K_CONTEXT %token K_CONTINUE + %token K_CURRENT %token K_CURSOR %token K_DEBUG %token K_DECLARE *** *** 263,268 --- 266,274 %token K_END %token K_ERRCODE %token K_ERROR + %token K_EXCEPTION_CONTEXT + %token K_EXCEPTION_DETAIL + %token K_EXCEPTION_HINT %token K_EXCEPTION %token K_EXECUTE %token K_EXIT *** *** 284,289 --- 290,296 %token K_LOG %token K_LOOP %token K_MESSAGE + %token K_MESSAGE_TEXT %token K_MOVE %token K_NEXT %token K_NO *** *** 300,311 --- 307,320 %token K_RELATIVE %token K_RESULT_OID %token K_RETURN + %token K_RETURNED_SQLSTATE %token K_REVERSE %token K_ROWTYPE %token K_ROW_COUNT %token K_SCROLL %token K_SLICE %token K_SQLSTATE + %token K_STACKED %token K_STRICT %token K_THEN %token K_TO ***