Re: [HACKERS] [GENERAL] Creating temp tables inside read only transactions
On Jul 11, 2011, at 21:49, David Johnston wrote:

> Right now I can emulate a hierarchical schema structure via a naming scheme - for example schemabase_sub1_sub2_etc. I am simply looking for a formal way to do the above AND also tell the system that I want all schemas under schemabase to be in the search path. Heck, I guess just allowing for simple pattern matching in search_path would be useful in this case regardless of the presence of an actual schema hierarchy. Using LIKE syntax, say: SET search_path TO schemabase_sub1_% or something similar.

create function set_searchpath_expand(v_pattern text) returns void as $$
declare
  v_searchpath text;
begin
  select string_agg(quote_ident(nspname), ',')
    into v_searchpath
    from pg_catalog.pg_namespace
    where nspname like v_pattern;
  execute 'set search_path = ' || v_searchpath;
end
$$ language plpgsql;

best regards,
Florian Pflug

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] WIP: Fast GiST index build
On Fri, Jul 8, 2011 at 6:18 PM, Tom Lane t...@sss.pgh.pa.us wrote: For test purposes, you could turn off synchronize_seqscans to prevent that. Thanks, it helps. I'm rerunning tests now. -- With best regards, Alexander Korotkov.
Re: [HACKERS] per-column generic option
(2011/07/12 0:44), Peter Eisentraut wrote:

> On lör, 2011-07-09 at 23:49 -0400, Alvaro Herrera wrote:
>> The new ALTER TABLE grammar seems a bit strange -- ADD, SET, DROP. Is this defined by the SQL/MED standard? It seems at odds with our handling of attoptions
> Well, I believe the SQL/MED options were actually implemented first and the attoptions afterwards. But it's probably not unwise to keep them separate, even though the syntaxes could have been made more similar.

As you say, the syntax for attoptions/reloptions seems to satisfy the requirements of SQL/MED: SET for ADD/SET and RESET for DROP. But changing it now would break backward compatibility. I think it's reasonable to unify the syntax for handling SQL/MED options at every level to OPTIONS (key 'value', ...).

Regards,
--
Shigeru Hanada
Re: [HACKERS] make_greater_string() does not return a string in some cases
This is an update of a patch for the NEXT CommitFest 2011/09. Please ignore this message.

1. Additional feature: EUC-JP incrementer
2. Bug fixes: bytea incrementer, libpq compilation

--
Kyotaro Horiguchi
NTT Open Source Software Center

diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 10b73fb..48a58a0 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -5502,6 +5502,18 @@ pattern_selectivity(Const *patt, Pattern_Type ptype)
 
 /*
+ * This function is the character increment function for bytea used in
+ * make_greater_string(); it has the same interface as pg_wchar_tbl.charinc.
+ */
+static bool
+byte_increment(unsigned char *ptr, int len)
+{
+	if (*ptr >= 255)
+		return false;
+
+	(*ptr)++;
+	return true;
+}
+
+/*
  * Try to generate a string greater than the given string or any
  * string it is a prefix of.  If successful, return a palloc'd string
  * in the form of a Const node; else return NULL.
@@ -5540,6 +5552,7 @@ make_greater_string(const Const *str_const, FmgrInfo *ltproc, Oid collation)
 	int			len;
 	Datum		cmpstr;
 	text	   *cmptxt = NULL;
+	character_incrementer charincfunc;
 
 	/*
 	 * Get a modifiable copy of the prefix string in C-string format, and set
@@ -5601,27 +5614,38 @@ make_greater_string(const Const *str_const, FmgrInfo *ltproc, Oid collation)
 		}
 	}
 
+	if (datatype != BYTEAOID)
+		charincfunc = pg_database_encoding_character_incrementer();
+	else
+		charincfunc = byte_increment;
+
 	while (len > 0)
 	{
-		unsigned char *lastchar = (unsigned char *) (workstr + len - 1);
-		unsigned char savelastchar = *lastchar;
+		int			charlen;
+		unsigned char *lastchar;
+		unsigned char savelastbyte;
+		Const	   *workstr_const;
+
+		if (datatype == BYTEAOID)
+			charlen = 1;
+		else
+			charlen = len - pg_mbcliplen(workstr, len, len - 1);
+
+		lastchar = (unsigned char *) (workstr + len - charlen);
 
 		/*
-		 * Try to generate a larger string by incrementing the last byte.
+		 * savelastbyte has meaning only for datatype == BYTEAOID
 		 */
-		while (*lastchar < (unsigned char) 255)
-		{
-			Const	   *workstr_const;
+		savelastbyte = *lastchar;
 
-			(*lastchar)++;
+		/*
+		 * Try to generate a larger string by incrementing the last byte or
+		 * character.
+		 */
+		if (charincfunc(lastchar, charlen))
+		{
 			if (datatype != BYTEAOID)
-			{
-				/* do not generate invalid encoding sequences */
-				if (!pg_verifymbstr(workstr, len, true))
-					continue;
 				workstr_const = string_to_const(workstr, datatype);
-			}
 			else
 				workstr_const = string_to_bytea_const(workstr, len);
@@ -5636,26 +5660,17 @@ make_greater_string(const Const *str_const, FmgrInfo *ltproc, Oid collation)
 				pfree(workstr);
 				return workstr_const;
 			}
-
+			/* No good, release unusable value and try again */
 			pfree(DatumGetPointer(workstr_const->constvalue));
 			pfree(workstr_const);
 		}
 
-		/* restore last byte so we don't confuse pg_mbcliplen */
-		*lastchar = savelastchar;
-
 		/*
-		 * Truncate off the last character, which might be more than 1 byte,
-		 * depending on the character encoding.
+		 * Truncate off the last character or restore last byte for BYTEA.
 		 */
-		if (datatype != BYTEAOID && pg_database_encoding_max_length() > 1)
-			len = pg_mbcliplen(workstr, len, len - 1);
-		else
-			len -= 1;
-
-		if (datatype != BYTEAOID)
-			workstr[len] = '\0';
+		len -= charlen;
+		workstr[len] = (datatype != BYTEAOID ? '\0' : savelastbyte);
 	}
 
 	/* Failed... */
diff --git a/src/backend/utils/mb/wchar.c b/src/backend/utils/mb/wchar.c
index 5b0cf62..8505bcb 100644
--- a/src/backend/utils/mb/wchar.c
+++ b/src/backend/utils/mb/wchar.c
@@ -1336,53 +1336,254 @@ pg_utf8_islegal(const unsigned char *source, int length)
 
 /*
  *---
+ * character
Re: [HACKERS] Online base backup from the hot-standby
>> This version of the patch adds a field into pg_controldata that tries to store the source of the base backup while in recovery mode.
>
> I think your ultimate goal with this patch is to be able to take a backup of a running hot-standby slave and recover it as another instance. This patch seems to provide the ability to have the second slave stop recovery at minRecoveryPoint from the control file. My understanding of the procedure you want to get to, to take base backups off a slave, is:
>
> 1. execute pg_start_backup('x') on the slave (*)
> 2. take a backup of the data dir
> 3. call pg_stop_backup() on the slave
> 4. copy the control file on the slave
>
> This patch only addresses the recovery portions.

Yes.

> I don't think the above comment is very clear on what backupserver is. Perhaps:
>
> /**
>  * backupserver is used while postgresql is in recovery mode to
>  * store the location of where the backup comes from.
>  * When Postgres starts recovery operations it is set to none.
>  * During recovery it is updated to either master or slave.
>  * When recovery operations finish it is updated back to none.
>  **/

Done.

> Also, shouldn't backupServer be the enum type 'BackupServer', not int? Other enums in the structure, such as DBState, are defined this way.

This is the same situation as wal_level, not DBState, no?

> Since I can't yet call pg_start_backup() or pg_stop_backup() on the slave, I am calling them on the master. (I also did some testing where I didn't put the system into backup mode.) I admit that I am not sure what to look for as an indication that the system isn't recovering to the correct point. In much of my testing I was just verifying that the slave started and my data 'looked' okay.

The updated patch can execute pg_start/stop_backup() on the standby server. One pass of the above steps (1 through 4) can now be done there. However, there are conditions:

* The master's full_page_writes must be on.
* On the slave, do not execute a stop/promote operation before pg_stop_backup() has been executed.
* The result of pg_start_backup() may exceed the result of pg_stop_backup().

> I seem to get this warning in my logs when I start up the instance based on the slave backup.
>
> LOG: 0: database system was interrupted while in recovery at log time 2011-07-08 18:40:20 EDT
> HINT: If this has occurred more than once some data might be corrupted and you might need to choose an earlier recovery target
>
> I'm wondering if this warning is a bit misleading to users, because it is an expected message when starting up an instance based on a slave backup (because the slave was already in recovery mode). If I shutdown this instance and start it up again I keep getting the warning. My understanding of your patch is that there shouldn't be any risk of corruption in that case (assuming your patch has no bugs). Can/should we be suppressing this message when we detect that we are recovering from a slave backup?

Suppressing this has not been supported yet; the message always happens when the backup is taken from a slave. What do you think about an approach that adds context to the message, except when the backup is taken from a slave?

> The direction of the patch has changed a bit during this commit fest. I think it would be good to provide an update on the rest of the changes you plan for this to be a complete useable feature. That would make it easier to comment on something you missed versus something you're planning on dealing with in the next stage.

I see. I will provide a patch in the next stage that can execute pg_start/stop_backup() and that addresses the above comments and conditions. Then please review.

I change this patch status to Returned with feedback.

Regards,
Jun Ishizuka
NTT Software Corporation
TEL: 045-317-7018
E-Mail: ishizuka@po.ntts.co.jp

standby_online_backup_04.patch
Description: Binary data
Re: [HACKERS] WIP: Fast GiST index build
New version of patch with a little more refactoring and comments.

--
With best regards,
Alexander Korotkov.

gist_fast_build-0.6.0.patch.gz
Description: GNU Zip compressed data
Re: [HACKERS] Patch Review: Bugfix for XPATH() if text or attribute nodes are selected
On Sun, 10 Jul 2011 17:06:22 -0500, Robert Haas wrote:

> On Jul 10, 2011, at 1:40 PM, Josh Berkus j...@agliodbs.com wrote:
>> Hackers,
>>
>>> B. 6. Current behaviour _is intended_ (there is an if to check the node type) and _natural_. In this particular case the user asks for the text content of some node, and this content is actually .
>>
>> I don't buy that. The check for the node type is there because two different libxml functions are used to convert nodes to strings. The if has absolutely *zero* to do with escaping, except for that missing escape_xml() call in the else case. Secondly, there is little point in having a type XML if we don't actually ensure that values of that type can only contain well-formed XML.
>>
>> Can anyone else weigh in on this? Peter?
>
> Unless I am missing something, Florian is clearly correct here.
>
> ...Robert

For me not, because this should be fixed internally by making the xml type safe; currently the xml type may be used to keep proper XMLs and any other kind of data as well. If I ask, by any means, select xpath(/text(...)), I want to get text.

1) How should I unescape the node in a client application (if it's a fragment of an xml, I don't have the header)? Bear in mind that XML must support streaming processing too.

2) Why should I treat text() differently from a select from varchar? In both cases I ask for xml; the driver can't handle this, because it doesn't know whether it gets a scalar, text, comment, element, or maybe a document.

3) What about current applications? Folks probably use this and are happy they get text, and will not see that the next release of PostgreSQL will break their applications.

There is of course a disadvantage to the current behaviour, as it may lead to inserting bad xmls (in one case), but I created an example where auto-escaping will create double-escaped xmls, and may lead to inserting improper data (this is about the 2nd patch, where Florian adds escaping, too).

SELECT XMLELEMENT(name root, XMLATTRIBUTES(foo.namespace AS sth))
FROM (SELECT (XPATH('namespace-uri(/*)', x))[1] AS namespace
      FROM (VALUES (XMLELEMENT(name root, XMLATTRIBUTES('<n' AS xmlns, 'v' AS value), 't'))) v(x)) AS foo;

        xmlelement
--------------------------
 <root sth="&amp;lt;n"/>

It can't be resolved without storing the type in xml, or adding xmltext, or adding a pseudo xmlany element which would be returned by xpath.

Regards,
Radek
Re: [HACKERS] Patch Review: Bugfix for XPATH() if text or attribute nodes are selected
On Jul 12, 2011, at 11:00, Radosław Smogura wrote:

> On Sun, 10 Jul 2011 17:06:22 -0500, Robert Haas wrote:
>> Unless I am missing something, Florian is clearly correct here.
> For me not, because this should be fixed internally by making the xml type safe

Huh?? Making the xml type safe is *exactly* what I'm trying to do here...

> currently xml type may be used to keep proper XMLs and any kind of data, as well.

As I pointed out before, that simply isn't true. Try storing non-well-formed data into an XML column (there *are* ways to do that, i.e. there are bugs, one of which I'm trying to fix here!) and then dump and (try to) reload your database. Ka-wom!

> If I ask, by any means select xpath(/text(...)). I want to get text.

And I want '3' || '4' to return the integer 34. Tough luck! The fact that XPATH() is declared to return XML, *not* TEXT, means you don't get what you want. Period. Feel free to provide a patch that adds a function XPATH_TEXT if you feel this is an issue.

XML *is* *not* simply an alias for TEXT! It's a distinct type, with its own distinct rules about what constitutes a valid value and what doesn't.

> 1) How should I unescape the node in a client application (if it's a fragment of an xml, I don't have the header)? Bear in mind that XML must support streaming processing too.

Huh?

> 2) Why should I treat text() differently from a select from varchar? In both cases I ask for xml; the driver can't handle this, because it doesn't know whether it gets a scalar, text, comment, element, or maybe a document.
>
> 3) What about current applications? Folks probably use this and are happy they get text, and will not see that the next release of PostgreSQL will break their applications.

That, and *only* that, I recognize as a valid concern. However, and *again* as I have pointed out multiple times before, backwards compatibility is no excuse not to fix bugs. Plus, there might just as well be applications which feed the contents of XML columns directly into an XML parser (as they have every right to!) and don't expect that parser to throw an error. Which, as it stands, we cannot guarantee. Having to deal with an error there is akin to having to deal with integer columns containing 'foobar'!

> There is of course a disadvantage to the current behaviour, as it may lead to inserting bad xmls (in one case), but I created an example where auto-escaping will create double-escaped xmls, and may lead to inserting improper data (this is about the 2nd patch, where Florian adds escaping, too).
>
> SELECT XMLELEMENT(name root, XMLATTRIBUTES(foo.namespace AS sth)) FROM (SELECT (XPATH('namespace-uri(/*)', x))[1] AS namespace FROM (VALUES (XMLELEMENT(name root, XMLATTRIBUTES('<n' AS xmlns, 'v' AS value),'t'))) v(x)) AS foo;
>  xmlelement
> -
>  <root sth="&amp;lt;n"/>

Radosław, you've raised that point before, and I refuted it. The crucial difference is that double-escaped values are well-formed, whereas un-escaped ones are not. Again, as I said before, the double-escaping done by XMLATTRIBUTES there is not pretty. But it's *not* XPATH()'s fault! To see that, simply replace your XPATH() expression with '&lt;n'::xml.

> And in fact it can't be resolved without storing the type in xml, or adding xmltext, or adding a pseudo xmlany element which would be returned by xpath.

Huh?

Frankly, Radosław, I get the feeling that you're not trying to understand my answers to your objections, but instead keep repeating the same assertions over and over again. Even though at least some of them, like XML being able to store arbitrary values, are simply wrong! And I'm getting pretty tired of this...

So far, you also don't seem to have taken a single look at the actual implementation of the patch, even though code review is supposed to be an integral part of the patch review process. I therefore don't believe that we're getting anywhere here.

So please either start reviewing the actual implementation, and leave the considerations about whether we want this or not to the eventual committer. Or, if you don't want to do that for one reason or another, please consider letting somebody else take over this review, i.e. consider removing your name from the Reviewer field.

best regards,
Florian Pflug
Re: [HACKERS] dropping table in testcase alter_table.sql
On fre, 2011-07-08 at 22:27 -0400, Robert Haas wrote:

> On Fri, Jul 8, 2011 at 1:45 AM, Ashutosh Bapat ashutosh.ba...@enterprisedb.com wrote:
>> I think tab1 and tab2 are too-common names for anyone to pick for the tables. Also, the test alter_table.sql is dropping many other tables (even those which have undergone renaming), so why not these two?
> Beats me, but I don't see any particular value to changing it.

It has occurred to me a few times that it could be useful to clarify the approach here. If we could somehow have a separable cleanup step for every test, and eliminate interdependencies between tests, we could more easily support a number of use cases, such as creating a completely populated regression test database for playing, or running tests in random order or in differently parallelized scenarios.
Re: [HACKERS] [v9.2] DROP Reworks Part.1 - Consolidate routines to handle DropStmt
On ons, 2011-07-06 at 12:40 -0400, Robert Haas wrote:

> I think perhaps we should create a big static array, each row of which would contain:
> - ObjectType
> - system cache ID for OID lookups
> - system catalog table OID for scans
> - attribute number for the name attribute, where applicable (see AlterObjectNamespace)
> - attribute number for the namespace attribute
> - attribute number for the owner attribute
> - ...and maybe some other properties

Yeah, I was thinking of the same thing a while ago. For large parts of the DDL support for collations I literally copied over the conversion support and ran sed over it. That must be made better. Take that as a test case if you will.
Re: [HACKERS] [GENERAL] Creating temp tables inside read only transactions
On 12 Jul 2011, at 3:25, Chris Travers wrote: Right now I can emulate a hierarchical schema structure via a naming scheme - for example schemabase_sub1_sub2_etc. I am simply looking for a formal way to do the above AND also tell the system that I want all schemas under schemabase to be in the search path. Heck, I guess just allowing for simply pattern matching in search_path would be useful in this case regardless of the presence of an actual schema hierarchy. Using LIKE syntax say: SET search_path TO schemabase_sub1_% or something similar. The only missing ability becomes a way for graphical tools to represent the schema hierarchy using a tree-structure with multiple depths. Right. Semantically myapp_schemaname_subschemaname is no less hierarchical than myapp.schemaname.subschemaname. It is. To the database, your first example is a single identifier, while your second example is a sequential combination of three identifiers. The second one contains explicit hierarchy, the first one does not. It's quite possible that the fact that these identifiers have a sequence is the biggest problem for implementing this in a relational database. Relational databases work with sets after all, which have no explicit sequence. With the introduction of recursive queries that's _possible_, but as claimed earlier (and I tend to agree), for performance reasons it is undesirable to apply this to system tables. If we were talking about a _set_ of identifiers instead, without the requirement of a hierarchy (eg. myapp.schemaname.subschemaname = subschemaname.myapp.schemaname), implementation would probably be easier/perform better. That does have some interesting implications for incompletely specified sets of namespaces, I'm not sure how desirable they are. What's cool is that you can specify just a server hostname and a table-name and (as long as there's no ambiguity) that's sufficient. 
Not so cool, if you use the above and someone clones the database on said host, you've suddenly introduced ambiguity. Problems like that are likely to turn up with incomplete identifier specifications though, just a bit more likely to happen if you take the meaning of the sequence of the identifiers out. Just a bit. The larger issue is that of potential ambiguity wrt cross-database references. Not necessarily, if the reference to a remote database is unambiguously recognisable as such, for example by using some kind of URI notation (eg. dsn://user@remote-database1). I'm also wondering how to handle this for multi-master replicated environments, in view of load-balancing. Those remote database references probably need to reference different databases depending on which master they're running on? From a security point-of-view I'd probably require a list of accessible remote databases per server (so that people cannot just query any database of their choice). That could also serve the load-balancing scenario.

Alban Hertroys

--
If you can't see the forest for the trees, cut the trees and you'll see there is no forest.
Re: [HACKERS] Patch Review: Bugfix for XPATH() if text or attribute nodes are selected
On Tue, 12 Jul 2011 11:45:59 +0200, Florian Pflug wrote:

> On Jul 12, 2011, at 11:00, Radosław Smogura wrote:
>> On Sun, 10 Jul 2011 17:06:22 -0500, Robert Haas wrote:
>>> Unless I am missing something, Florian is clearly correct here.
>> For me not, because this should be fixed internally by making the xml type safe
> Huh?? Making the xml type safe is *exactly* what I'm trying to do here...
>> currently xml type may be used to keep proper XMLs and any kind of data, as well.
> As I pointed out before, that simply isn't true. Try storing non-well-formed data into an XML column (there *are* ways to do that, i.e. there are bugs, one of which I'm trying to fix here!) and then dump and (try to) reload your database. Ka-wom!
>> If I ask, by any means select xpath(/text(...)). I want to get text.
> And I want '3' || '4' to return the integer 34. Tough luck! The fact that XPATH() is declared to return XML, *not* TEXT, means you don't get what you want. Period. Feel free to provide a patch that adds a function XPATH_TEXT if you feel this is an issue. XML *is* *not* simply an alias for TEXT! It's a distinct type, with its own distinct rules about what constitutes a valid value and what doesn't.
>> 1) How should I unescape the node in a client application (if it's a fragment of an xml, I don't have the header)? Bear in mind that XML must support streaming processing too.
> Huh?
>> 2) Why should I treat text() differently from a select from varchar? In both cases I ask for xml; the driver can't handle this, because it doesn't know whether it gets a scalar, text, comment, element, or maybe a document.
>> 3) What about current applications? Folks probably use this and are happy they get text, and will not see that the next release of PostgreSQL will break their applications.
> That, and *only* that, I recognize as a valid concern. However, and *again* as I have pointed out multiple times before, backwards compatibility is no excuse not to fix bugs. Plus, there might just as well be applications which feed the contents of XML columns directly into an XML parser (as they have every right to!) and don't expect that parser to throw an error. Which, as it stands, we cannot guarantee. Having to deal with an error there is akin to having to deal with integer columns containing 'foobar'!

Bugs must be resolved in a smart way, especially if they change behaviour, with consideration of the impact the change will produce; removing support for xml would resolve this bug as well. I've said the problem should be resolved in a different way.

>> There is of course a disadvantage to the current behaviour, as it may lead to inserting bad xmls (in one case), but I created an example where auto-escaping will create double-escaped xmls, and may lead to inserting improper data (this is about the 2nd patch, where Florian adds escaping, too).
>>
>> SELECT XMLELEMENT(name root, XMLATTRIBUTES(foo.namespace AS sth)) FROM (SELECT (XPATH('namespace-uri(/*)', x))[1] AS namespace FROM (VALUES (XMLELEMENT(name root, XMLATTRIBUTES('<n' AS xmlns, 'v' AS value),'t'))) v(x)) AS foo;
>>  xmlelement
>> -
>>  <root sth="&amp;lt;n"/>
> Radosław, you've raised that point before, and I refuted it. The crucial difference is that double-escaped values are well-formed, whereas un-escaped ones are not. Again, as I said before, the double-escaping done by XMLATTRIBUTES there is not pretty. But it's *not* XPATH()'s fault! To see that, simply replace your XPATH() expression with '&lt;n'::xml.
>> And in fact it can't be resolved without storing the type in xml, or adding xmltext, or adding a pseudo xmlany element which would be returned by xpath.
> Huh?
> Frankly, Radosław, I get the feeling that you're not trying to understand my answers to your objections, but instead keep repeating the same assertions over and over again. Even though at least some of them, like XML being able to store arbitrary values, are simply wrong! And I'm getting pretty tired of this...
> So far, you also don't seem to have taken a single look at the actual implementation of the patch, even though code review is supposed to be an integral part of the patch review process. I therefore don't believe that we're getting anywhere here.

So far, you don't know whether I have taken a single look; your suspicions are wrong, and you try to blame me. All of your sentences about not understanding I may send back to you, and blame you with your own words.

> So please either start reviewing the actual implementation, and leave the considerations about whether we want this or not to the eventual committer. Or, if you don't want to do that for one reason or another, please consider letting somebody else take over this review, i.e. consider removing your name from the Reviewer field.

If I do review, I may put in my comments, but "I get the feeling that you're not trying to understand my answers to your objections, but instead keep repeating the same assertions over and over again" - and in the patch there is review of code. So please either
Re: [HACKERS] dropping table in testcase alter_table.sql
On Jul 12, 2011, at 4:46 AM, Peter Eisentraut pete...@gmx.net wrote:

> On fre, 2011-07-08 at 22:27 -0400, Robert Haas wrote:
>> On Fri, Jul 8, 2011 at 1:45 AM, Ashutosh Bapat ashutosh.ba...@enterprisedb.com wrote:
>>> I think tab1 and tab2 are too-common names for anyone to pick for the tables. Also, the test alter_table.sql is dropping many other tables (even those which have undergone renaming), so why not these two?
>> Beats me, but I don't see any particular value to changing it.
> It has occurred to me a few times that it could be useful to clarify the approach here. If we could somehow have a separable cleanup step for every test, and eliminate interdependencies between tests, we could more easily support a number of use cases such as creating a completely populated regression test database for playing, or running tests in random order or in differently parallelized scenarios.

True.

...Robert
Re: [HACKERS] per-column generic option
On Jul 12, 2011, at 12:31 AM, Shigeru Hanada shigeru.han...@gmail.com wrote:

> (2011/07/11 10:21), Robert Haas wrote:
>> On Jul 9, 2011, at 10:49 PM, Alvaro Herrera alvhe...@commandprompt.com wrote:
>>> In short: in my opinion, attoptions and attfdwoptions need to be one thing and the same.
>> I feel the opposite. In particular, what happens when a future release of PostgreSQL adds an attoption that happens to have the same name as somebody's per-column FDW option? Something breaks, that's what... Another point: We don't commingle these concepts at the table level. It doesn't make sense to have table reloptions separate from table FDW options but then go and make the opposite decision at the column level.
> I'm afraid that I've misunderstood the discussion. Do you mean that per-table options should be stored in reloptions, but per-column should be separated from attoptions? (I think I've misread...)

No, I was arguing that they should both be separate.

...Robert
Re: [HACKERS] per-column generic option
(2011/07/12 21:19), Robert Haas wrote:

> On Jul 12, 2011, at 12:31 AM, Shigeru Hanada shigeru.han...@gmail.com wrote:
>> I'm afraid that I've misunderstood the discussion. Do you mean that per-table options should be stored in reloptions, but per-column should be separated from attoptions? (I think I've misread...)
> No, I was arguing that they should both be separate.

Thanks, I'm relieved. :)

Regards,
--
Shigeru Hanada
Re: [HACKERS] Patch Review: Bugfix for XPATH() if text or attribute nodes are selected
On Jul12, 2011, at 12:57 , Radosław Smogura wrote: On Tue, 12 Jul 2011 11:45:59 +0200, Florian Pflug wrote: On Jul12, 2011, at 11:00 , Radosław Smogura wrote: On Sun, 10 Jul 2011 17:06:22 -0500, Robert Haas wrote: Unless I am missing something, Florian is clearly correct here. For me not, because this should be fixed internally by making xml type sefe Huh??. Making the xml type safe is *exactly* what I'm trying to do here... currently xml type may be used to keep proper XMLs and any kind of data, as well. As I pointed out before, that simply isn't true. Try storing non-well-formed data into an XML column (there *are* ways to do that, i.e. there are bugs, one if which I'm trying to fix here!) and then dump and (try to) reload your database. Ka-wom! You again very conveniently ignored me here, and thus the *fact* that XML *doesn't* allow arbitrary textual values to be stored. If it did, there would not be a Ka-wom! here. I beg you to actually try this out. Put the result of an XPATH() expression that returns a literal '' into a column of type XML, and dump and reload. If I ask, by any means select xpath(/text(...)). I want to get text. And I want '3' || '4' to return the integer 34. Though luck! The fact that XPATH() is declared to return XML, *not* TEXT means you don't get what you want. Period. Feel free to provide a patch that adds a function XPATH_TEXT if you feel this is an issue. XML *is* *not* simply an alias for TEXT! It's a distinct type, which its down distinct rules about what constitutes a valid value and what doesn't. Again, you ignored my answer. 3) What about current applications, folks probably uses this and are happy they get text, and will not see, that next release of PostgreSQL will break their applications. That, and *only* that, I recognize as a valid concern. However, and *again* as I have pointer out before a *multiple* of times, backwards compatibility is no excuse not to fix bugs. 
Plus, there might just as well be applications which feed the contents of XML columns directly into an XML parser (as they have every right to!) and don't expect that parser to throw an error. Which, as it stands, we cannot guarantee. Having to deal with an error there is akin to having to deal with integer columns containing 'foobar'! Bugs must be resolved in a smart way, especially if they change behaviour, with consideration of the impact the change will produce; removing support for xml resolves this bug as well. I've said the problem should be resolved in a different way. Fine. So what does that different way look like? Keeping things as they are is certainly not an option, since it fails as soon as you dump and reload (or simply cast the value to TEXT and back to XML). There is of course a disadvantage to the current behaviour, as it may lead to inserting badly formed XML (in one case), but I created an example where auto-escaping will create double-escaped XML, and may lead to inserting improper data (this is about the 2nd patch where Florian adds escaping, too). SELECT XMLELEMENT(name root, XMLATTRIBUTES(foo.namespace AS sth)) FROM (SELECT (XPATH('namespace-uri(/*)', x))[1] AS namespace FROM (VALUES (XMLELEMENT(name root, XMLATTRIBUTES('n' AS xmlns, 'v' AS value),'t'))) v(x)) as foo; xmlelement: <root sth="&amp;lt;n"/> Radosław, you've raised that point before, and I refuted it. The crucial difference is that double-escaped values are well-formed, whereas un-escaped ones are not. Again, as I said before, the double-escaping done by XMLATTRIBUTES there is not pretty. But it's *not* XPATH()'s fault! To see that, simply replace your XPATH() expression with '&lt;n'::xml. And here too I see no response from you... Frankly, Radosław, I get the feeling that you're not trying to understand my answers to your objections, but instead keep repeating the same assertions over and over again. Even though at least some of them, like XML being able to store arbitrary values, are simply wrong! 
And I'm getting pretty tired of this... So far, you also don't seem to have taken a single look at the actual implementation of the patch, even though code review is supposed to be an integral part of the patch review process. I therefore don't believe that we're getting anywhere here. So far, you don't know whether I took a single look; your suspicions are wrong, and you try to blame me. Well, you haven't commented on the code, so I assumed that you hadn't looked at it. May I instead assume that you did look at it, and found the patch to be in good shape, implementation-wise? All of your sentences about not understanding I may send back to you, and blame you with your own words. I think I have so far provided quite detailed responses to all of your concerns. If not, please point me to one of your concerns where I haven't either acknowledged that there is a problem, or explained in quite some detail why there is none. So
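To make the distinction in the thread above concrete, here is a small illustration (in Python, purely for demonstration — PostgreSQL's xml type is implemented in C on top of libxml2): a raw markup character in an attribute makes the document ill-formed, while a double-escaped value is merely ugly but still parses.

```python
from xml.sax.saxutils import escape
import xml.etree.ElementTree as ET

raw = "<n"            # a value containing a markup character
once = escape(raw)    # "&lt;n"      -- correctly escaped
twice = escape(once)  # "&amp;lt;n"  -- double-escaped: ugly, but still valid markup

def is_well_formed(doc):
    """True if doc parses as XML."""
    try:
        ET.fromstring(doc)
        return True
    except ET.ParseError:
        return False

# Embedding the raw value produces an ill-formed document -- the
# "Ka-wom!" case: such a value cannot survive a dump and reload.
assert not is_well_formed('<root sth="%s"/>' % raw)
# Escaped and even double-escaped values remain well-formed.
assert is_well_formed('<root sth="%s"/>' % once)
assert is_well_formed('<root sth="%s"/>' % twice)
```

The double-escaped document round-trips through any XML parser; the un-escaped one does not, which is the crux of Florian's argument.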
Re: [HACKERS] [COMMITTERS] pgsql: Enable CHECK constraints to be declared NOT VALID
On Jul 11, 2011, at 8:34 PM, Bruce Momjian br...@momjian.us wrote: Can we add text if the employer is _not_ the feature sponsor? I don't see that as much better. Commit messages should not be ads, IMHO. There are plenty of ways to give credit without polluting the commit log with it. ...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] dropping table in testcase alter_table.sql
Peter Eisentraut pete...@gmx.net writes: It has occurred to me a few times that it could be useful to clarify the approach here. If we could somehow have a separable cleanup step for every test, and eliminate interdependencies between tests, we could more easily support a number of use cases such as creating a completely populated regression test database for playing, or running tests in random order or in differently parallelized scenarios. The limiting case of this is that each regression test script would be expected to start in an empty database and leave the DB empty on exit. I think that would make the tests less useful, not more, for several reasons: 1. They'd be slower, since every test would have to start by creating and populating some tables. 2. The final state of the regression database would no longer be useful as an environment for running ad-hoc manual tests. 3. The final state of the regression database would no longer be useful as a test case for pg_dump and pg_upgrade. The ALTER TABLE tests are particularly useful in connection with #3, because they leave around tables that have been modified in various ways. I'm not sure that the particular tables in question here are of any great value for stressing pg_dump, but in general I'd not want to see a push to make alter_table.sql clean up after itself. We could of course address all these issues in some more-formal way. But I don't think it's a good idea to say let's make the regression tests less messy without understanding that they have these additional use-cases that have to be catered for somehow. regards, tom lane
Re: [HACKERS] reducing the overhead of frequent table locks, v4
On Jul 11, 2011, at 11:45 AM, Jeff Davis pg...@j-davis.com wrote: * ... It's also possible that * we're acquiring a second or third lock type on a relation we have * already locked using the fast-path, but for now we don't worry about * that case either. */ How common is that case? There are only 16 entries in the fast path lock table, so it seems like it would frequently fill up. So, if there are common code paths that acquire different weak locks on the same relation, then we might commonly miss a fast-path opportunity. Yeah, that might be worth some more thought. I haven't been that worried about overflow of the fast path table. If you are locking more than 16 relations at once, you probably have at least 5 tables in the query, maybe more - it depends on how many indexes you have, of course. My assumption has been that at that point you're going to spend enough time planning and executing the query that the lock manager will no longer be a major bottleneck. Of course, there might be cases where that isn't so. The trade-off here is that if we don't skip the fast path when we think the table's full, we slow down lock acquisitions 17 through infinity. I was reluctant to do that. I've been operating on the theory that the fast path should exist not because it's in general better (and thus we must be certain to use it whenever possible) but because it relieves unbearable pressure in specific problematic cases (and thus outside of those cases we just need it to stay out of the way). But it's possible that this is an overly simplistic mental model and not the best trade-off in practice. ...Robert
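A toy model of the trade-off being discussed (a hypothetical Python sketch, not the actual C implementation — names like `Backend` and `lock_relation` are invented for illustration): weak locks on distinct relations fill a small per-backend table, and everything past slot 16, or any lock the fast path can't represent, falls back to the shared lock manager.

```python
FP_LOCK_SLOTS = 16  # per-backend fast-path slots, as in the patch under discussion

class Backend:
    """Toy per-backend state: a small fast-path table plus a reference
    to the shared (contended) lock manager it falls back to."""

    def __init__(self, shared_lock_manager):
        self.fast_path = {}   # relation oid -> lock mode
        self.shared = shared_lock_manager

    def lock_relation(self, rel, mode):
        # Weak locks on not-yet-locked relations can use the fast path,
        # but only while the 16-entry table has room.
        if mode == "weak" and rel not in self.fast_path:
            if len(self.fast_path) < FP_LOCK_SLOTS:
                self.fast_path[rel] = mode
                return "fast-path"
        # Lock acquisition 17 and beyond -- or a second lock type on a
        # relation already locked via the fast path, or a strong lock --
        # goes through the shared lock manager.
        self.shared.acquire(rel, mode)
        return "shared"
```

This makes Jeff's concern visible: a workload that touches the same relation with two different weak lock modes burns the fast path for that relation, and queries touching more than 16 relations spill into the shared path, which is exactly the case Robert argues is already dominated by planning and execution cost.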
Re: [HACKERS] spinlock contention
On Jul7, 2011, at 03:35 , Robert Haas wrote: Some poking around suggests that the problem isn't that spinlocks are routinely contended - it seems that we nearly always get the spinlock right off the bat. I'm wondering if the problem may be not so much that we have continuous spinlock contention, but rather that every once in a while a process gets time-sliced out while it holds a spinlock. If it's an important spinlock (like the one protecting SInvalReadLock), the system will quickly evolve into a state where every single processor is doing nothing but trying to get that spinlock. Even after the hapless lock-holder gets to run again and lets go of the lock, you have a whole pile of other backends who are sitting there firing off lock xchgb in a tight loop, and they can only get it one at a time, so you have ferocious cache line contention until the backlog clears. Pondering this some more, I came up with another idea, pretty much orthogonal to the shared counter partitioning I posted a patch for. If indeed the problem isn't spin lock contention, but rather losing control of the CPU while holding a spin lock, then why not try to get rid of the spin lock entirely? On Linux, that's easy - this is exactly what futexes are about. But it occurred to me that kernel support isn't actually needed to do that - futexes can effectively be emulated in user-space using just a semaphore, and without doing a syscall except when there's contention. The algorithm is quite straightforward, if one assumes a lock-free implementation of a queue (more on that below) LockAcquire: (1) CAS the lock state to increment the reader count or set the exclusive bit in a loop while the lock looks free. If successful, we're done, otherwise we continue with (2) (2) Add ourselves to the wait queue [ Since we've added ourselves to the queue, we *must* now decrement the semaphore no matter what, to keep the increment/decrement calls balanced. We're careful to maintain that invariant below. 
] (3) Fence (AKA issue full memory barrier) (4) Re-check if the lock still looks taken. If it does, we decrement the semaphore (PGSemaphoreLock), and (upon wake-up) restart at (1). Otherwise, continue with (5) (5) Remove the first waiter from the queue and increment her semaphore. Rinse-and-repeat until we have either removed ourselves from the queue or the queue is empty. (6) Decrement our semaphore. [ (6) is necessary in the general case, see the remark below (2). But we can of course detect the case where we'd increment our own semaphore in (5) only to decrement it again in (6), and skip both operations ] LockRelease: (1) Set the lock state to 0, i.e. release the lock. (2) Fence (AKA issue full memory barrier) (3) If the lock still looks free, remove the first waiter from the queue and increment her semaphore. Rinse-and-repeat while the lock looks free and the queue is non-empty. [ From a correctness POV, we only have to wake up one waiter here, and that only if there isn't one who has been woken up but hasn't yet retried to take the lock. In reality, the decision if and how many waiters to wake up would depend on their desired lock level, and some variant of what we currently call releaseOK. ] The fencing steps (LockAcquire:3) and (LockRelease:2) guarantee that if we block in LockAcquire() a lock holder will see our queue entry and thus will wake us up eventually. Because we use a semaphore and not, say, a simple signal, we don't have to worry about the precise ordering of block and unblock operations - we just need to ensure they're balanced. Now on to the enqueue() and dequeue() primitives that the algorithm above depends on. There are multiple published algorithms for lock-free queues. Some googling turned up An optimistic approach to lock-free FIFO queues, E.L. Mozes, N. Shavit, DOI 10.1007/s00446-007-0050-0 and CAS-Based Lock-Free Algorithm for Shared Deques, M.M. 
Michael. The second one looks promising, since it only requires a single CAS to enqueue and dequeue entries in the common case. Thus, it should be just as efficient as our current spin-lock-based queue in the uncontended case (and much better in the contended case, of course). [ Our case is, in fact, simpler than the generic setting that these algorithms support. We only ever enqueue our *own* proc array entry (so allocation and re-use of queue entries isn't much of an issue), and always *process* the queue after enqueuing something - either directly in LockAcquire or later in LockRelease. We thus don't really need to support concurrent removal of queue entries, but might get away with simply skipping queue processing if we detect that somebody else is in the process of doing so. I think I have an idea for a simpler lock-less queue that fits our needs, but I haven't yet ironed
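For illustration only, the acquire/release protocol sketched above can be modeled in a drastically simplified Python form. This is not Florian's proposal verbatim: Python's guard lock stands in for atomic CAS on the lock word (so no explicit fences are needed), a plain deque stands in for the lock-free queue, and reader counts are omitted — it only shows the control flow of blocking on one's own semaphore instead of spinning.

```python
import threading
from collections import deque

class FutexLikeLock:
    """Sketch of a semaphore-based futex emulation: waiters enqueue a
    private semaphore and sleep on it; the releaser wakes the first
    waiter, who then retries the lock from scratch."""

    def __init__(self):
        self.state = 0                  # lock word: 0 = free, 1 = held
        self.guard = threading.Lock()   # stands in for CAS + memory fences
        self.waiters = deque()          # stands in for the lock-free queue

    def acquire(self):
        while True:
            with self.guard:
                if self.state == 0:     # (1) "CAS" the lock word free -> taken
                    self.state = 1
                    return
                sem = threading.Semaphore(0)
                self.waiters.append(sem)  # (2) enqueue ourselves
                # (3)/(4): lock still looks taken, so we will sleep below.
            sem.acquire()               # block, like PGSemaphoreLock;
                                        # on wake-up, restart at (1)

    def release(self):
        with self.guard:
            self.state = 0              # (1) release the lock word
            if self.waiters:            # (3) wake the first waiter, if any
                self.waiters.popleft().release()
```

Because each waiter sleeps on its own semaphore, a wake-up that races with another thread grabbing the lock is harmless: the woken thread simply finds the lock taken, re-enqueues, and sleeps again — the balanced increment/decrement invariant from the sketch above.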
Re: [HACKERS] [COMMITTERS] pgsql: Enable CHECK constraints to be declared NOT VALID
On Tue, Jul 12, 2011 at 02:34, Bruce Momjian br...@momjian.us wrote: Simon Riggs wrote: On Sun, Jul 3, 2011 at 7:51 PM, Peter Eisentraut pete...@gmx.net wrote: On tor, 2011-06-30 at 15:09 -0400, Alvaro Herrera wrote: Robert Hass (whose name I misspelled in the commit message above) just mentioned to me (in an answer to my apologizing about it) that he didn't think that mentioning sponsors for patch development was a good idea. I don't think we have a policy for this, but I have done it for some time now and nobody has complained, so I sort of assumed it was okay. Besides, some of the people pouring the money in do care about it; moreover, it provides a little incentive for other companies that might also be in a position to fund development but lack the peer approval of the idea, or a final little push. I think commit messages should be restricted to describing what was changed and who is responsible for it. Once we open it for things like sponsorship, what's to stop people from adding personal messages, what they had for breakfast, currently listening to, or just selling advertising space in each commit message for 99 cents? Agreed. We should credit people somewhere, but not here. Otherwise, we'll be forced to add Sponsored by RedHat, Sponsored by 2ndQuadrant etc onto commit messages. Agreed. On one level I like the sponsor message, but on the other having Sponsored by RedHat on every Tom Lane item will get tiring. ;-) Can we add text if the employer is _not_ the feature sponsor? That would be quite unfair to those who *do* employ committers. Basically you'd get credit only if you didn't employ a committer. This all becomes much easier if we keep the ads out of the commit messages, and stick to the technical side there. And find another venue for the other credit. 
-- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/
Re: [HACKERS] [COMMITTERS] pgsql: Enable CHECK constraints to be declared NOT VALID
Excerpts from Magnus Hagander's message of mar jul 12 09:34:56 -0400 2011: Agreed. On one level I like the sponsor message, but on the other having Sponsored by RedHat on every Tom Lane item will get tiring. ;-) Can we add text if the employer is _not_ the feature sponsor? That would be quite unfair to those who *do* employ committers. Basically you'd get credit only if you didn't employ a committer. Well, that has worked well for my case -- I haven't ever credited my employer, only those that have specifically hired us for a particular patch. My employer gets a lot of credit in the form of email signatures, like the one below ;-) But I see your point and I will stick to whatever policy we come up with (assuming we come up with one). This all becomes much easier if we keep the ads out of the commit messages, and stick to the technical side there. And find another venue for the other credit. I'm open to ideas. -- Álvaro Herrera alvhe...@commandprompt.com The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: [HACKERS] per-column generic option
Excerpts from Shigeru Hanada's message of mar jul 12 03:11:54 -0400 2011: (2011/07/12 0:44), Peter Eisentraut wrote: On lör, 2011-07-09 at 23:49 -0400, Alvaro Herrera wrote: The new ALTER TABLE grammar seems a bit strange -- ADD, SET, DROP. Is this defined by the SQL/MED standard? It seems at odds with our handling of attoptions Well, I believe the SQL/MED options were actually implemented first and the attoptions afterwards. But it's probably not unwise to keep them separate, even though the syntaxes could have been made more similar. As you say, syntax for attoptions/reloptions seem to satisfy the requirement of SQL/MED; SET for ADD/SET and RESET for DROP. Speaking of which -- what's the difference between ADD and SET for SQL/MED options? -- Álvaro Herrera alvhe...@commandprompt.com The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: [HACKERS] [COMMITTERS] pgsql: Enable CHECK constraints to be declared NOT VALID
Alvaro Herrera wrote: Excerpts from Magnus Hagander's message of mar jul 12 09:34:56 -0400 2011: Agreed. On one level I like the sponsor message, but on the other having Sponsored by RedHat on every Tom Lane item will get tiring. ;-) Can we add text if the employer is _not_ the feature sponsor? That would be quite unfair to those who *do* employ committers. Basically you'd get credit only if you didn't employ a committer. Well, that has worked well for my case -- I haven't ever credited my employer, only those that have specifically hired us for a particular patch. My employer gets a lot of credit in the form of email signatures, like the one below ;-) But I see your point and I will stick to whatever policy we come up with (assuming we come up with one). This all becomes much easier if we keep the ads out of the commit messages, and stick to the technical side there. And find another venue for the other credit. I'm open to ideas. Agreed. I am not firm either way on the issue; I was just throwing out a suggestion. -- Bruce Momjian br...@momjian.us http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +
Re: [HACKERS] marking old branches as no longer maintained
On Tue, Jul 12, 2011 at 01:10, Andrew Dunstan and...@dunslane.net wrote: On 07/11/2011 07:59 PM, Bruce Momjian wrote: Andrew Dunstan wrote: On 06/28/2011 05:31 PM, Peter Eisentraut wrote: On tis, 2011-06-28 at 17:05 -0400, Andrew Dunstan wrote: Couldn't you just put a text file on the build farm server with recommended branches? As I told Magnus, that gets ugly because of limitations in MinGW's SDK perl. I suppose I could just not implement the feature for MinGW, but I've tried damn hard not to make those sorts of compromises and I'm not keen to start. The buildfarm code can upload the build result via HTTP; why can't it download a file via HTTP? It has to use a separate script to do that. I don't really want to add another one just for this. (thinks a bit) I suppose I can make it do: my $url = "http://buildfarm.postgresql.org/branches_of_interest.txt"; my $branches_of_interest = `perl -MLWP::Simple -e "getprint(q{$url})"`; Maybe that's the best option. It's certainly going to be less code than anything else :-) Could you pull the list of active branches from our web site HTML? I can, but I'm not that keen on having to do web scraping. Currently my test machine (crake) is using the above scheme and it's working fine. It's not a huge burden to maintain, after all. You don't actually need to resort to web scraping - it's available as well-formatted XML (http://www.postgresql.org/versions.rss). That said, I agree that it's not a huge burden, and probably a better idea, to do it your current way. -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/
Re: [HACKERS] TODO list updated
Excerpts from Bruce Momjian's message of lun jul 11 18:58:35 -0400 2011: I have updated the TODO wiki to remove the 9.1-completed items: http://wiki.postgresql.org/wiki/Todo This will allow us to now mark 9.2-completed items. I have created TodoDone91 from the items marked TodoDone on the rev before your edit, for reference. We now have those pages for 8.4 through 9.1. (Sorry if this is a dupe -- I thought I had sent this yesterday but I don't see it anywhere) -- Álvaro Herrera alvhe...@commandprompt.com The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: [HACKERS] Review of VS 2010 support patches
On Sun, Jul 10, 2011 at 20:46, Brar Piening b...@gmx.de wrote: Sorry for the late response - I've been at a wedding this weekend. Something is strange here. Did you run perltidy with the exact parameters documented in the README file? Yes - I usually even copy paste it from the README as perltidy -b -bl -nsfs -naws -l=100 -ole=unix *.pl *.pm (pasted once more) is hard to remember and takes a while to type. Bleh, that's annoying - that means it behaves differently in different versions :S If so, perltidy seems to be version- or platform-dependent. I ran it, and got a slightly different patch. It's not big differences, but the simple fact that perltidy doesn't always generate the same result is annoying. Can you run it again, and make sure you get the exact same diff? So that it wasn't accidentally run off the wrong version or something? I just rechecked that applying my two patches vs. applying my two patches + running the above perltidy command gives no difference (0 byte patch). I've attached the differences between your perltidy and my perltidy run. I'm using (perltidy -v): This is perltidy, v20090616 I'm currently using (perl -v): This is perl 5, version 14, subversion 1 (v5.14.1) built for MSWin32-x64-multi-thread and (perltidy -v): This is perltidy, v20101217 But I've just recently upgraded to the latest Perl version. The patch has been produced using some 5.12.? ActivePerl and its corresponding perltidy version which (whatever it was) obviously produced the same result for me. I'm using 5.10... Not sure if it's the perl version or more likely the perltidy version that causes the difference, but there's not too much we can do about that. I'm not sure the differences are big enough that we actually want to care about it - I think it's easier to just take changes caused by it out of each commit. We're still getting the large majority as the same. 
So - for now, I have made a perltidy run and committed it, which should make it slightly easier for reviewing the actual patch :-) -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/
Re: [HACKERS] Allow pg_archivecleanup to ignore extensions
On Sun, Jul 10, 2011 at 7:13 PM, Josh Berkus j...@agliodbs.com wrote: This patch[1] is for some reason marked waiting on Author. But I can't find that there's been any review of it searching the list. What's going on with it? Has it been reviewed? Yes, I reviewed it on list. Some minor changes were discussed. I'm with Greg now, so we'll discuss and handle it. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services
Re: [HACKERS] TODO list updated
Alvaro Herrera wrote: Excerpts from Bruce Momjian's message of lun jul 11 18:58:35 -0400 2011: I have updated the TODO wiki to remove the 9.1-completed items: http://wiki.postgresql.org/wiki/Todo This will allow us to now mark 9.2-completed items. I have created TodoDone91 from the items marked TodoDone on the rev before your edit, for reference. We now have those pages for 8.4 through 9.1. (Sorry if this is a dupe -- I thought I had sent this yesterday but I don't see it anywhere) I thought about doing that but wasn't sure anyone wanted it so I skipped it. Thanks. -- Bruce Momjian br...@momjian.us http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +
Re: [HACKERS] remove README.mb.jp and README.mb.big5?
On Mon, Jul 11, 2011 at 16:58, Peter Eisentraut pete...@gmx.net wrote: These files are last updated 2001 or 2002 and I'm pretty sure they are outdated. It looks like no one is maintaining them, so we should remove them. +1. -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/
Re: [HACKERS] spurious use of %m format in pg_upgrade
Peter Eisentraut wrote: pg_upgrade's pg_scandir_internal() makes use of the non-standard %m format: pg_log(PG_FATAL, "could not open directory \"%s\": %m\n", dirname); Is this an oversight, or is there an undocumented assumption that this code will only be used on platforms where %m works? (Which platforms don't have scandir() anyway?) Yes, surely an oversight, and I see it has been removed --- good. -- Bruce Momjian br...@momjian.us http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +
Re: [HACKERS] Full GUID support
On 07/03/2011 11:54 AM, Peter Eisentraut wrote: On sön, 2011-07-03 at 13:42 -0500, Michael Gould wrote: I would like to request that full support for the UUID data type can be added. I think that even though there is a contrib module, since this is a standard datatype that Postgres ought to be the one actually assigning the value. What difference would that make? In 9.1, you can easily load the required extension, and there'd be no difference from a built-in variant. It is about usability, folks. -- Command Prompt, Inc. - http://www.commandprompt.com/ PostgreSQL Support, Training, Professional Services and Development The PostgreSQL Conference - http://www.postgresqlconference.org/ @cmdpromptinc - @postgresconf - 509-416-6579
Re: [HACKERS] Full GUID support
On 07/12/2011 12:03 PM, Joshua D. Drake wrote: On 07/03/2011 11:54 AM, Peter Eisentraut wrote: On sön, 2011-07-03 at 13:42 -0500, Michael Gould wrote: I would like to request that full support for the UUID data type can be added. I think that even though there is a contrib module, since this is a standard datatype that Postgres ought to be the one actually assigning the value. What difference would that make? In 9.1, you can easily load the required extension, and there'd be no difference from a built-in variant. It is about usability, folks. What about extensions makes them less usable? cheers andrew
Re: [HACKERS] [COMMITTERS] pgsql: Enable CHECK constraints to be declared NOT VALID
On 07/12/2011 06:54 AM, Alvaro Herrera wrote: Excerpts from Magnus Hagander's message of mar jul 12 09:34:56 -0400 2011: Agreed. On one level I like the sponsor message, but on the other having Sponsored by RedHat on every Tom Lane item will get tiring. ;-) Create a macro ;) Can we add text if the employer is _not_ the feature sponsor? That would be quite unfair to those who *do* employ committers. Basically you'd get credit only if you didn't employ a committer. Well, that has worked well for my case -- I haven't ever credited my employer, only those that have specifically hired us for a particular patch. My employer gets a lot of credit in the form of email signatures, like the one below ;-) Yeah it depends on the committer. CMD gets credit through @commandprompt.com, the sig file and a host of other areas but Tom uses his personal information, so... But I see your point and I will stick to whatever policy we come up with (assuming we come up with one). This all becomes much easier if we keep the ads out of the commit messages, and stick to the technical side there. And find another venue for the other credit. I'm open to ideas. I think the commit log isn't actually useful for the advertising portion of this. Users don't read commit logs for the most part. However, it is an easy way for people who are writing release notes, press releases, etc... to find the information. Is it a good place for the information? No. Is it the easiest place to store it until somebody steps up and creates a proper way to track it so that it can be disseminated properly throughout the community? Probably. We do need a way to track this information. JD -- Command Prompt, Inc. 
- http://www.commandprompt.com/ PostgreSQL Support, Training, Professional Services and Development The PostgreSQL Conference - http://www.postgresqlconference.org/ @cmdpromptinc - @postgresconf - 509-416-6579
[HACKERS] Deferred partial/expression unique constraints
Hi, I guess $subject wasn't implemented because plain unique indexes aren't represented in pg_constraint and thus do not have a place to store information about being deferred? Other than that I do not see any special complications in implementing it? Are there any reasons not to store unique indexes in pg_constraint in the future? Greetings, Andres
Re: [HACKERS] reducing the overhead of frequent table locks, v4
On Tue, 2011-07-12 at 07:55 -0500, Robert Haas wrote: I haven't been that worried about overflow of the fast path table. If you are locking more than 16 relations at once, you probably have at least 5 tables in the query, maybe more - it depends on how many indexes you have, of course. My assumption has been that at that point you're going to spend enough time planning and executing the query that the lock manager will no longer be a major bottleneck. Of course, there might be cases where that isn't so. Yeah, I think you're right here. It's probably not much of a practical concern. I was slightly bothered because it seemed a little unpredictable. But it seems very minor, and if we wanted to fix it later I think we could. Regards, Jeff Davis
Re: [HACKERS] Patch Review: Bugfix for XPATH() if text or attribute nodes are selected
Radoslaw, For me this discussion is over. I put down my objections and suggestions. The full review is available in the archives, and the rationale for not escaping is put in the review of your 2nd patch, about scalar values. Did you install and test the functionality of the patch? I can't tell from your review whether you got that far. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com
Re: [HACKERS] Patch Review: Bugfix for XPATH() if text or attribute nodes are selected
Florian, Radoslaw, Please both of you calm down. Florian is trying to improve our XML type. Radoslaw is trying to help out by reviewing it. It's not a benefit to anyone for you two to get into an argument about who said what ... especially if the argument is based on (as far as I can see) not understanding what the other person was saying. Answering What did you mean by X? Did you mean Y, or something else? is much more friendly than saying You couldn't possibly mean X or You don't understand X. Consider that *both* of you are exchanging emails in a language which is native to neither of you. This goes for all discussions on -hackers, but your recent conversation is a good example of unnecessary argument. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com
Re: [HACKERS] Allow pg_archivecleanup to ignore extensions
On 7/12/11 7:38 AM, Simon Riggs wrote: On Sun, Jul 10, 2011 at 7:13 PM, Josh Berkus j...@agliodbs.com wrote: This patch[1] is for some reason marked waiting on Author. But I can't find that there's been any review of it searching the list. What's going on with it? Has it been reviewed? Yes, I reviewed it on list. Some minor changes were discussed. I'm with Greg now, so we'll discuss and handle it. I couldn't find the review searching the archives. Can you please link it in the Commitfest application? Thanks. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Full GUID support
Magnus, JD, UUID *is* in core. It's just the generation functions that aren't. No, it's not. It's in /contrib, which makes it an extension. Uh UUID/GUID is used pervasively throughout enterprise apps, especially Java apps. Oh, I guess I encounter it a lot less than you. Time for a survey? -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Deferred partial/expression unique constraints
On 7/12/11 9:46 AM, Andres Freund wrote: Hi, I guess $subject wasn't implemented because plain unique indexes aren't represented in pg_constraint and thus do not have a place to store information about being deferred? Other than that I do not see any special complications in implementing it? Um, I thought that deferrable unique constraints were a 9.0 feature, no? -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Full GUID support
On 12 July 2011 19:24, Josh Berkus j...@agliodbs.com wrote: Magnus, JD, UUID *is* in core. It's just the generation functions that aren't. No, it's not. It's in /contrib, which makes it an extension. The functions to produce UUIDs are in contrib, but the UUID data type itself is in core. You get the type uuid whether you install the contrib module or not. http://www.postgresql.org/docs/current/static/datatype-uuid.html -- Thom Brown Twitter: @darkixion IRC (freenode): dark_ixion Registered Linux user: #516935 EnterpriseDB UK: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] reducing the overhead of frequent table locks, v4
On Jul 12, 2011, at 12:02 PM, Jeff Davis pg...@j-davis.com wrote: Yeah, I think you're right here. It's probably not much of a practical concern. I was slightly bothered because it seemed a little unpredictable. But it seems very minor, and if we wanted to fix it later I think we could. Yes, I agree. I think there are a number of things we could possibly fine-tune, but it's not clear to me just yet which ones are really problems or what the right solutions are. I think once the basic patch is in and people start beating on it we'll get a better feeling for which parts can benefit from further engineering. ...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Full GUID support
On Jul 12, 2011, at 1:24 PM, Josh Berkus j...@agliodbs.com wrote: Magnus, JD, UUID *is* in core. It's just the generation functions that aren't. No, it's not. It's in /contrib, which makes it an extension. Uh UUID/GUID is used pervasively throughout enterprise apps, especially Java apps. Oh, I guess I encounter it a lot less than you. Time for a survey? How about we just leave it alone? I think this is a solution in search of a problem. ...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Arrays of Records in PL/Perl
Hackers, Given this script: BEGIN; CREATE TYPE foo AS ( this int, that int ); CREATE OR REPLACE FUNCTION dump(foo[]) returns text language plperlu AS $$ use Data::Dumper; Dumper shift; $$; CREATE OR REPLACE FUNCTION dump(foo) returns text language plperlu AS $$ use Data::Dumper; Dumper shift; $$; select dump(row(3, 5)::foo); select dump(ARRAY[row(3, 5)::foo]); ROLLBACK; The output is: dump -- $VAR1 = { + 'that' => '5',+ 'this' => '3' + }; + (1 row) Time: 0.936 ms dump -- $VAR1 = '{(3,5)}';+ (1 row) That is, if a record is passed to a PL/Perl function, it's correctly converted into a hash. If, however, an array of records is passed, the records are stringified, rather than turned into hashes. This seems inconsistent. Bug? Thanks, David -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Full GUID support
Thom, The functions to produce UUIDs are in contrib, but the UUID data type itself is in core. You get the type uuid whether you install the contrib module or not. http://www.postgresql.org/docs/current/static/datatype-uuid.html Oh! I guess that shows you how much I use the type then. Well, in that case, this whole discussion is moot unless someone is offering to write us a UUID generator from scratch. Is someone doing so? -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Deferred partial/expression unique constraints
On 12 July 2011 19:26, Josh Berkus j...@agliodbs.com wrote: On 7/12/11 9:46 AM, Andres Freund wrote: Hi, I guess $subject wasn't implemented because plain unique indexes aren't represented in pg_constraint and thus do not have a place to store information about being deferred? Other than that I do not see any special complications in implementing it? Um, I thought that deferrable unique constraints were a 9.0 feature, no? Yes, but there is no syntax to create a unique constraint on an expression, and hence to create a deferrable unique expression check. However, that doesn't seem like such a serious limitation, because the same functionality can be achieved using an exclusion constraint with the equality operator. Regards, Dean -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
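A minimal sketch of the workaround Dean describes, assuming a hypothetical table on 9.0 or later; an exclusion constraint with the equality operator provides deferrable uniqueness on an expression:

```sql
-- Hypothetical example: case-insensitive uniqueness on an expression,
-- deferrable, via an exclusion constraint with the equality operator.
CREATE TABLE users (email text);

ALTER TABLE users
    ADD CONSTRAINT users_email_lower_excl
    EXCLUDE (lower(email) WITH =)
    DEFERRABLE INITIALLY DEFERRED;

BEGIN;
INSERT INTO users VALUES ('A@example.com');
INSERT INTO users VALUES ('a@example.com');  -- no error yet: check is deferred
-- COMMIT would raise a conflicting-key error here
ROLLBACK;
```

The DEFERRABLE clause works here because exclusion constraints, unlike plain unique indexes, are represented in pg_constraint and so have a place to record deferrability.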
Re: [HACKERS] Full GUID support
On 07/12/2011 11:56 AM, Josh Berkus wrote: Thom, The functions to produce UUIDs are in contrib, but the UUID data type itself is in core. You get the type uuid whether you install the contrib module or not. http://www.postgresql.org/docs/current/static/datatype-uuid.html Oh! I guess that shows you how much I use the type then. Well, in that case, this whole discussion is moot unless someone is offering to write us a UUID generator from scratch. Is someone doing so? I am reaching back into my mental archives for when we first decided to implement a UUID datatype. As I recall, we purposely did not include the generators in core because every language already has its own generators, popular languages anyway. I think we should just leave it be, with a note to use your native language's generator OR use the contributed modules. JD -- Command Prompt, Inc. - http://www.commandprompt.com/ PostgreSQL Support, Training, Professional Services and Development The PostgreSQL Conference - http://www.postgresqlconference.org/ @cmdpromptinc - @postgresconf - 509-416-6579 -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Arrays of Records in PL/Perl
On Tue, Jul 12, 2011 at 12:45, David E. Wheeler da...@kineticode.com wrote: Hackers, That is, if a record is passed to a PL/Perl function, it's correctly converted into a hash. If, however, an array of records is passed, the records are stringified, rather than turned into hashes. This seems inconsistent. Bug? All arrays in 9.0 and lower are strings, regardless of whether they are comprised of composite types. It's not so much a bug as a limitation. Alexey Klyukin fixed this for 9.1 :-) [ In 9.1 we could not make them straight Perl arrayrefs, as we needed to keep the string representation for backwards compatibility. What we did in 9.1 is overload the arrayref and string operations on a blessed object so you can treat it as either. ] -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
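A minimal sketch of what the 9.1 behavior described above looks like in practice (the type and function names here are hypothetical, not from the thread):

```sql
-- On 9.1, a composite-type array arrives in PL/Perl as a blessed object
-- usable both as a string and as a Perl arrayref of hashes.
CREATE TYPE foo AS (this int, that int);

CREATE OR REPLACE FUNCTION sum_this(foo[]) RETURNS int LANGUAGE plperl AS $$
    my $rows = shift;                   # PostgreSQL::InServer::ARRAY on 9.1
    my $total = 0;
    $total += $_->{this} for @$rows;    # arrayref dereference just works
    return $total;
$$;

SELECT sum_this(ARRAY[row(3,5)::foo, row(7,9)::foo]);  -- expected: 10
```

On 9.0 and earlier, the same function would have to parse the '{"(3,5)","(7,9)"}' string form by hand.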
Re: [HACKERS] reducing the overhead of frequent table locks, v4
On Tue, 2011-07-12 at 13:32 -0500, Robert Haas wrote: On Jul 12, 2011, at 12:02 PM, Jeff Davis pg...@j-davis.com wrote: Yeah, I think you're right here. It's probably not much of a practical concern. I was slightly bothered because it seemed a little unpredictable. But it seems very minor, and if we wanted to fix it later I think we could. Yes, I agree. I think there are a number of things we could possibly fine-tune, but it's not clear to me just yet which ones are really problems or what the right solutions are. I think once the basic patch is in and people start beating on it we'll get a better feeling for which parts can benefit from further engineering. OK, marking ready for committer assuming that you will take care of my previous complaints (the biggest one is that holdsStrongLockCount should be boolean). Disclaimer: I have done no performance review at all, even though this is a performance patch! I like the patch and I like the approach. It seems like the potential benefits are worth the extra complexity, which seems manageable and mostly isolated to lock.c. Regards, Jeff Davis -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Full GUID support
On 07/12/2011 09:15 AM, Andrew Dunstan wrote: On 07/12/2011 12:03 PM, Joshua D. Drake wrote: On 07/03/2011 11:54 AM, Peter Eisentraut wrote: On sön, 2011-07-03 at 13:42 -0500, Michael Gould wrote: I would like to request that full support for the UUID data type can added. I think that even though there is a contrib module, since this is a standard datatype that Postgres ought to be the one actually assigning the value. What difference would that make? In 9.1, you can easily load the required extension, and there'd be no difference from a built-in variant. It is about usability folks. What about extensions makes them less usable? It is an extra step, that is less usable. Does it matter? Shrug, I know I hate having to type apt-get just to use xyz, does it mean it is a big deal? Probably not. -- Command Prompt, Inc. - http://www.commandprompt.com/ PostgreSQL Support, Training, Professional Services and Development The PostgreSQL Conference - http://www.postgresqlconference.org/ @cmdpromptinc - @postgresconf - 509-416-6579 -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Full GUID support
On 07/12/2011 03:44 PM, Joshua D. Drake wrote: What about extensions makes them less usable? It is an extra step, that is less usable. Does it matter? Shrug, I know I hate having to type apt-get just to use xyz, does it mean it is a big deal? Probably not. By that argument we wouldn't have any extensions at all, just a monolithic product. I don't think that would be an advance. cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Full GUID support
On Tue, Jul 12, 2011 at 04:29:33PM -0400, Andrew Dunstan wrote: On 07/12/2011 03:44 PM, Joshua D. Drake wrote: What about extensions makes them less usable? It is an extra step, that is less usable. Does it matter? Shrug, I know I hate having to type apt-get just to use xyz, does it mean it is a big deal? Probably not. By that argument we wouldn't have any extensions at all, just a monolithic product. I don't think that would be an advance. cheers andrew

For me, the criteria I like to use for core functionality are:

1. It is available with a common definition from a number of DB products. With a UUID, its size/structure is predefined, and this allows a dump from another SQL product to be loaded into a PostgreSQL DB.

2. It would benefit from tighter integration with the core DB for either performance or development use.

3. It is a feature where the extra step is an unexpected nuisance.

That is why I think the contrib module is the correct place for the UUID generators, but the UUID type is better as a core function. Regards, Ken -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Single pass vacuum - take 1
Hi All, As per discussion here http://archives.postgresql.org/pgsql-hackers/2011-05/msg01119.php PFA a patch which implements the idea with some variation. At the start of the first pass, we remember the current LSN. Every page that needs some work is HOT-pruned so that dead tuples are truncated to dead line pointers. We collect those dead line pointers and mark them as dead-vacuumed. Since we don't have any LP flag bits available, we instead just use the LP_DEAD flag along with offset value 1 to mark the line pointer as dead-vacuumed. The page is defragmented and we store the LSN remembered at the start of the pass in the page special area as the vacuum LSN. We also update the free space at that point because we are not going to do a second pass on the page anymore. Once we collect all dead line pointers and mark them as dead-vacuumed, we clean up the indexes and remove all index pointers pointing to those dead-vacuumed line pointers. If the index vacuum finishes successfully, we store the LSN in the pg_class row of the table (needs catalog changes). At that point, we are certain that there are no index pointers pointing to dead-vacuumed line pointers and they can be reclaimed at the next opportunity. During normal operations or a subsequent vacuum, if the page is chosen for HOT-pruning, we check if it has any dead-vacuumed line pointers and if the vacuum LSN stored in the page special area is equal to the one stored in the pg_class row, and reclaim those dead-vacuumed line pointers (the index pointers to these line pointers are already taken care of). If the pg_class LSN is not the same, the last vacuum probably did not finish completely, and we collect the dead-vacuumed line pointers just like other dead line pointers and try to clean up the index pointers as usual. I ran a few pgbench tests with the patch. I don't see much difference in the overall tps, but the vacuum time for the accounts table reduces by nearly 50%. 
Nor do I see much difference in the overall bloat, but then pgbench uses HOT very nicely and the accounts table got only a couple of vacuum cycles in my 7-8 hour run. There are a couple of things that probably need more attention. I am not sure if we need to teach ANALYZE to treat dead line pointers differently. Since they take up much less space than a dead tuple, they should definitely have a lower weight, but at the same time, we need to take into account the number of indexes on the table. The start-of-first-pass LSN that we are remembering is in fact the start of the WAL page, and I think there could be some issues with that, especially for very tiny tables. For example, a first vacuum may run completely. If another vacuum is started on the same table and, say, it gets the same LSN (because we did not write more than 1 page worth of WAL in between), and if the second vacuum aborts after it has cleaned up a few pages, we might get into some trouble. The likelihood of such things happening is very small, but maybe it's worth taking care of. Maybe we can get the exact current LSN, and not store it in pg_class if we don't do anything during the cycle. Comments ? 
Thanks, Pavan -- Pavan Deolasee EnterpriseDB http://www.enterprisedb.com

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 01a492e..12918d2 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -3912,7 +3912,8 @@ log_heap_clean(Relation reln, Buffer buffer,
 				OffsetNumber *redirected, int nredirected,
 				OffsetNumber *nowdead, int ndead,
 				OffsetNumber *nowunused, int nunused,
-				TransactionId latestRemovedXid)
+				TransactionId latestRemovedXid,
+				bool hasvaclsn, XLogRecPtr indexvaclsn)
 {
 	xl_heap_clean xlrec;
 	uint8		info;
@@ -3927,6 +3928,8 @@ log_heap_clean(Relation reln, Buffer buffer,
 	xlrec.latestRemovedXid = latestRemovedXid;
 	xlrec.nredirected = nredirected;
 	xlrec.ndead = ndead;
+	xlrec.hasvaclsn = hasvaclsn;
+	xlrec.indexvaclsn = indexvaclsn;
 	rdata[0].data = (char *) &xlrec;
 	rdata[0].len = SizeOfHeapClean;
@@ -4196,6 +4199,8 @@ heap_xlog_clean(XLogRecPtr lsn, XLogRecord *record)
 	int			ndead;
 	int			nunused;
 	Size		freespace;
+	bool		hasvaclsn;
+	XLogRecPtr	indexvaclsn;
 
 	/*
 	 * We're about to remove tuples. In Hot Standby mode, ensure that there's
@@ -4228,6 +4233,8 @@ heap_xlog_clean(XLogRecPtr lsn, XLogRecord *record)
 	nredirected = xlrec->nredirected;
 	ndead = xlrec->ndead;
+	hasvaclsn = xlrec->hasvaclsn;
+	indexvaclsn = xlrec->indexvaclsn;
 	end = (OffsetNumber *) ((char *) xlrec + record->xl_len);
 	redirected = (OffsetNumber *) ((char *) xlrec + SizeOfHeapClean);
 	nowdead = redirected + (nredirected * 2);
@@ -4239,7 +4246,8 @@ heap_xlog_clean(XLogRecPtr lsn, XLogRecord *record)
 	heap_page_prune_execute(buffer,
 							redirected, nredirected,
 							nowdead, ndead,
-							nowunused, nunused);
+							nowunused, nunused,
+							hasvaclsn, indexvaclsn);
 	freespace = PageGetHeapFreeSpace(page);		/* needed
[HACKERS] pgmail html
Hi everybody: Does anybody have an example of sending an email in HTML format using pgmail? Best Regards, Fernando Acosta
Re: [HACKERS] dropping table in testcase alter_table.sql
On tis, 2011-07-12 at 08:51 -0400, Tom Lane wrote: Peter Eisentraut pete...@gmx.net writes: It has occurred to me a few times that it could be useful to clarify the approach here. If we could somehow have a separable cleanup step for every test, and eliminate interdependencies between tests, we could more easily support a number of use cases, such as creating a completely populated regression test database for playing, or running tests in random order or in differently parallelized scenarios. The limiting case of this is that each regression test script would be expected to start in an empty database and leave the DB empty on exit. I think that would make the tests less useful, not more, for several reasons: 1. They'd be slower, since every test would have to start by creating and populating some tables. 2. The final state of the regression database would no longer be useful as an environment for running ad-hoc manual tests. 3. The final state of the regression database would no longer be useful as a test case for pg_dump and pg_upgrade. I think you misunderstood what I was saying. I wanted to take the cleanup parts out of all test cases and make it a choice whether to run them. Right now we have a lot of test cases that clean up after themselves, which is useful in some cases (testing the cleanup, for one thing), but not useful for 2. and 3. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
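The split Peter is suggesting might look something like this (file names and objects hypothetical): each test owns its objects, and the teardown lives in a separable script that can be skipped when a populated database is wanted:

```sql
-- alter_table.sql: the test proper, creating everything it needs
CREATE TABLE at_test (a int);
ALTER TABLE at_test ADD COLUMN b text;
INSERT INTO at_test VALUES (1, 'one');

-- alter_table_cleanup.sql: run only when cleanup is desired,
-- e.g. skipped when building a database for pg_dump/pg_upgrade testing
DROP TABLE at_test;
```

Running the test scripts plus all cleanup scripts would leave an empty database; running only the test scripts would leave the fully populated database useful for points 2 and 3 above.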
Re: [HACKERS] per-column generic option
On tis, 2011-07-12 at 09:56 -0400, Alvaro Herrera wrote: Speaking of which -- what's the difference between ADD and SET for SQL/MED options? ADD adds to the existing options; SET overwrites all options with what you specify. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] isolation tests are not being run in buildfarm
Hi, I think we're not running the isolation test suite. I noticed that the stage seemed to take too little time (zero seconds, in fact), and the stage log only says 'make check' is not supported. Install PostgreSQL, then 'make installcheck' instead. http://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=anchovydt=2011-07-12%2021%3A23%3A01stg=isolation-check This should probably be fixed. [ ... looks some more ... ] Oh, it seems inconsistent. Several hosts do not run it at all; chinchilla and anchovy are running the wrong make target; but at least chough seems to be doing it right. Crake is good too. http://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=choughdt=2011-07-12%2016%3A30%3A03stg=isolation-check http://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crakedt=2011-07-12%2015%3A32%3A02stg=isolation-check Apparently the animals (err, birds) run by a certain Andrew Dunstan are working correctly. I wonder why that might be. -- Álvaro Herrera alvhe...@alvh.no-ip.org -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] isolation tests are not being run in buildfarm
On 07/12/2011 05:42 PM, Alvaro Herrera wrote: Hi, I think we're not running the isolation test suite. I noticed that the stage seemed to take too little time (zero seconds in fact), and looking into the stage logs only says 'make check' is not supported. Install PostgreSQL, then 'make installcheck' instead. http://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=anchovydt=2011-07-12%2021%3A23%3A01stg=isolation-check This should probably be fixed. [ ... looks some more ... ] Oh, it seems inconsistent. Several hosts do not run it at all; chinchilla and anchovy are running the wrong make target; but at least chough seems to be doing it right. Crake is good too. http://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=choughdt=2011-07-12%2016%3A30%3A03stg=isolation-check http://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crakedt=2011-07-12%2015%3A32%3A02stg=isolation-check Apparently the animals (err, birds) run by a certain Andrew Dunstan are working correctly. I wonder why that might be. See http://archives.postgresql.org/pgsql-hackers/2011-05/msg00399.php for why it changed. cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] per-column generic option
(2011/07/12 22:56), Alvaro Herrera wrote: Speaking of which -- what's the difference between ADD and SET for SQL/MED options? ADD can only add new options; it can't overwrite an existing option's value. To overwrite an existing option's value, you need to use SET instead. Regards, -- Shigeru Hanada -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
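A sketch of the three actions on a per-column option, following the OPTIONS (key 'value', ...) form discussed in this thread (the foreign table and option names here are hypothetical):

```sql
-- ADD introduces a new option; it errors if the option already exists.
ALTER FOREIGN TABLE ft ALTER COLUMN c OPTIONS (ADD colname 'remote_c');

-- SET overwrites the existing option's value.
ALTER FOREIGN TABLE ft ALTER COLUMN c OPTIONS (SET colname 'remote_c2');

-- DROP removes the option entirely.
ALTER FOREIGN TABLE ft ALTER COLUMN c OPTIONS (DROP colname);
```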
Re: [HACKERS] FOR KEY LOCK foreign keys
Excerpts from Noah Misch's message of vie mar 11 12:51:14 -0300 2011: On Fri, Feb 11, 2011 at 02:13:22AM -0500, Noah Misch wrote: Automated tests would go a long way toward building confidence that this patch does the right thing. Thanks to the SSI patch, we now have an in-tree test framework for testing interleaved transactions. The only thing it needs to be suitable for this work is a way to handle blocked commands. If you like, I can try to whip something up for that. [off-list ACK followed] Here's a patch implementing that. It applies to master, with or without your KEY LOCK patch also applied, though the expected outputs reflect the improvements from your patch. I add three isolation test specs: fk-contention: blocking-only test case from your blog post fk-deadlock: the deadlocking test case I used during patch review fk-deadlock2: Joel Jacobson's deadlocking test case Thanks for this patch. I have applied it, adjusting the expected output of these tests to the HEAD code. I'll adjust it when I commit the fklocks patch, I guess, but it seemed simpler to have it out of the way; besides, it might end up benefitting other people who might be messing with the locking code. I only support one waiting command at a time. As long as one command continues to wait, I run other commands to completion synchronously. Should be fine for now, I guess. I think this will work on Windows as well as pgbench does, but I haven't verified that. We will find out shortly. -- Álvaro Herrera alvhe...@commandprompt.com The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Arrays of Records in PL/Perl
On Jul 12, 2011, at 12:19 PM, Alex Hunsaker wrote: All arrays in 9.0 and lower are strings, regardless of whether they are comprised of composite types. It's not so much a bug as a limitation. Alexey Klyukin fixed this for 9.1 :-) Oh? dump -- $VAR1 = { + 'that' => '5',+ 'this' => '3' + }; + (1 row) Time: 2.016 ms dump $VAR1 = bless( { + 'array' => [ + { + 'that' => '5',+ 'this' => '3' + } + ],+ 'typeoid' => 16457 + }, 'PostgreSQL::InServer::ARRAY' );+ (1 row) Woo! Thanks! [ In 9.1 we could not make them straight Perl arrayrefs, as we needed to keep the string representation for backwards compatibility. What we did in 9.1 is overload the arrayref and string operations on a blessed object so you can treat it as either. ] Yep, awesome. Best, David -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] spinlock contention
On Jul 12, 2011, at 8:03 AM, Florian Pflug f...@phlo.org wrote: The algorithm is quite straight forward, if one assumes a lock-free implementation of a queue (More on that below) This is similar to the CAS-based LWLocks I played around with, though I didn't use a lock-free queue. I think I probably need to devote some time to figuring out why that didn't work out to a win... ...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] interval precision oddness
When you create a column with a plain interval type, the typmod is set to -1 and the information schema reports this as 6, because that's what the internal default value is (see the _pg_datetime_precision function). But when you create a column such as "interval year to month", the typmod is actually the bit encoding of year to month in the higher 16 bits and 65535 in the lower 16 bits, and so the information schema reports the precision as 65535, whereas the actual behavior still corresponds to a precision of 6. I guess this could be seen as a reporting issue. We could adjust _pg_datetime_precision to map 65535 to 6, just like -1 is mapped to 6. Or is there anything else wrong here? -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
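The report can be reproduced with a quick sketch (the table name is hypothetical):

```sql
-- Compare what the information schema claims for the two interval forms.
CREATE TABLE iv (a interval, b interval year to month);

SELECT column_name, interval_type, interval_precision
  FROM information_schema.columns
 WHERE table_name = 'iv'
 ORDER BY column_name;
-- Per the report: column a shows precision 6, while column b shows
-- 65535, even though both actually behave as precision 6.
```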
Re: [HACKERS] interval precision oddness
Peter Eisentraut pete...@gmx.net writes: When you create a column with a plain interval column, the typmod is set to -1 and the information schema reports this as 6, because that's what the internal default value is (see _pg_datetime_precision function). But when you create a column such as interval year to month), the typmod is actually the bit encoding of year to month in the higher 16 bits and 65535 in the lower 16 bits, and so the information schema reports the precision as 65535, whereas the actual behavior still corresponds to a precision of 6. I guess this could be seen as a reporting issue. We could adjust _pg_datetime_precision to map 65535 to 6, just like -1 is mapped to 6. Or is there anything else wrong here? No, it sounds like the information_schema function didn't get the memo about what that meant. See INTERVAL_FULL_RANGE, INTERVAL_FULL_PRECISION macros and usage thereof, esp. intervaltypmodout. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Full GUID support
On 07/12/2011 01:29 PM, Andrew Dunstan wrote: On 07/12/2011 03:44 PM, Joshua D. Drake wrote: What about extensions makes them less usable? It is an extra step, that is less usable. Does it matter? Shrug, I know I hate having to type apt-get just to use xyz, does it mean it is a big deal? Probably not. By that argument we wouldn't have any extensions at all, just a monolithic product. I don't think that would be an advance. cheers andrew By that argument, yes, but with the caveat of what we are talking about here. I think what this boils down to, if we were to change anything at all, is looking at what our competitors are doing. -- Command Prompt, Inc. - http://www.commandprompt.com/ PostgreSQL Support, Training, Professional Services and Development The PostgreSQL Conference - http://www.postgresqlconference.org/ @cmdpromptinc - @postgresconf - 509-416-6579 -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Full GUID support
On Jul 12, 2011, at 1:40 PM, k...@rice.edu wrote: That is why I think having the UUID generators be a contrib module is the correct place for them to be, but the UUID type is better as a core function. I'm okay with this, though given the fact that ftp.ossp.org has been down for *months*, I'm inclined to think that we ought to include it in the contrib distribution for easy linking. Best, David -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Full GUID support
David, I'm okay with this, though given the fact that ftp.ossp.org has been down for *months*, I'm inclined to think that we ought to include it in the contrib distribution for easy linking. What license is it under? -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Full GUID support
David E. Wheeler da...@kineticode.com writes: On Jul 12, 2011, at 1:40 PM, k...@rice.edu wrote: That is why I think having the UUID generators be a contrib module is the correct place for them to be, but the UUID type is better as a core function. I'm okay with this, though given the fact that ftp.ossp.org has been down for *months*, Curious considering that the machine is there (responds to ping), and the ossp.org webserver works fine. Has anyone bugged the owner about that? I'm inclined to think that we ought to include it in the contrib distribution for easy linking. I'm disinclined to do anything that might amount to forking the library, or even looking like we wanted to take responsibility for it. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Deferred partial/expression unique constraints
On Tuesday, July 12, 2011 08:57:44 PM Dean Rasheed wrote: On 12 July 2011 19:26, Josh Berkus j...@agliodbs.com wrote: On 7/12/11 9:46 AM, Andres Freund wrote: Hi, I guess $subject wasn't implemented because plain unique indexes aren't represented in pg_constraint and thus do not have a place to store information about being deferred? Other than that I do not see any special complications in implementing it? Um, I thought that deferrable unique constraints were a 9.0 feature, no? Yes, but there is no syntax to create a unique constraint on an expression, and hence to create a deferrable unique expression check. However, that doesn't seem like such a serious limitation, because the same functionality can be achieved using an exclusion constraint with the equality operator. That doesn't solve the issue of a partial index, right? Also, I don't find it that intuitive to package an expression inside an operator (which needs to be complicated enough not to be accidentally used and still be expressive...). Especially if that expression involves more than one column (which isn't that hard to imagine). Thanks, Andres -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Tweaking the planner's heuristics for small/empty tables
There's a thread over in pgsql-performance http://archives.postgresql.org/pgsql-performance/2011-07/msg00046.php in which the main conclusion was that we need to take a fresh look at the heuristics the planner uses when dealing with small or empty relations.  The code in question is in estimate_rel_size() in plancat.c:

    curpages = RelationGetNumberOfBlocks(rel);

    /*
     * HACK: if the relation has never yet been vacuumed, use a
     * minimum estimate of 10 pages.  This emulates a desirable aspect
     * of pre-8.0 behavior, which is that we wouldn't assume a newly
     * created relation is really small, which saves us from making
     * really bad plans during initial data loading.  (The plans are
     * not wrong when they are made, but if they are cached and used
     * again after the table has grown a lot, they are bad.)  It would
     * be better to force replanning if the table size has changed a
     * lot since the plan was made ... but we don't currently have any
     * infrastructure for redoing cached plans at all, so we have to
     * kluge things here instead.
     *
     * We approximate "never vacuumed" by "has relpages = 0", which
     * means this will also fire on genuinely empty relations.  Not
     * great, but fortunately that's a seldom-seen case in the real
     * world, and it shouldn't degrade the quality of the plan too
     * much anyway to err in this direction.
     */
    if (curpages < 10 && rel->rd_rel->relpages == 0)
        curpages = 10;

That comment is of 8.0 vintage, and it needs to be updated, because it's now the case that there *is* an automatic path for refreshing plans when table sizes change.  Once the number of updates exceeds the auto-analyze threshold, autovac will run an ANALYZE, which will update the relation's pg_class row, which will force a relcache inval, which will cause the plancache.c code to mark any cached plans using the relation as needing to be rebuilt.  So that raises the question of whether we shouldn't just drop the if-statement entirely.
I experimented with that a bit, and soon found that it resulted in some probably-undesirable changes in the regression test results.  In particular the planner seemed to be switching from indexscan to seqscan plans for accesses to very small tables, which may not be a good tradeoff.  I'm a bit loath to twiddle the behavior here without extensive testing, since for the most part we've not had many complaints about the planner's behavior for small tables.

Another reason not to rely completely on the auto-analyze update path is that it doesn't work for temp tables, since autovac can't access another session's temp tables.  It's also worth noting that auto-analyze will never kick in at all for tables of less than about 60 rows, unless there is update/delete traffic on them.

The issue that came up in pgsql-performance was that this if-statement was firing for an empty inheritance parent table, causing a scan on the parent to look significantly more expensive than it really was, and in fact a good bit more expensive than the actually-useful index probe on its child table.  This bogus estimate thus discouraged the planner from using a nestloop with the partitioned table on the inside, which in reality was the most appropriate plan.  So we could tackle that issue with a pretty narrowly-focused test: disable the if-statement for inheritance parent tables, a la

    if (curpages < 10 &&
        rel->rd_rel->relpages == 0 &&
        !rel->rd_rel->relhassubclass)
        curpages = 10;

This is justifiable on the grounds that an inheritance parent table is much more likely to be meant to stay empty than an ordinary table.

Another thing that struck me while looking at the code is that the curpages clamp is applied to indexes too, which seems like a thinko.  A table occupying a few pages wouldn't likely have an index as big as the table itself is.
So what I'm currently thinking about is a change like this:

    if (curpages < 10 &&
        rel->rd_rel->relpages == 0 &&
        !rel->rd_rel->relhassubclass &&
        rel->rd_rel->relkind != RELKIND_INDEX)
        curpages = 10;

plus a suitable rewrite of the comment.  This seems like a safe enough change to apply to 9.1.  Going forward we might want to change it more, but I think it'd require some real-world testing.

Thoughts?

			regards, tom lane
Re: [HACKERS] spinlock contention
On Jul13, 2011, at 00:10 , Robert Haas wrote:
> On Jul 12, 2011, at 8:03 AM, Florian Pflug <f...@phlo.org> wrote:
>> The algorithm is quite straightforward, if one assumes a lock-free implementation of a queue (More on that below)
> This is similar to the CAS-based LWLocks I played around with, though I didn't use a lock-free queue.  I think I probably need to devote some time to figuring out why that didn't work out to a win...

Yeah, the non-waitqueue-related parts are mostly identical.  The only significant difference there is that the waiters-present bit is replaced by fence-and-recheck.

What motivated me to research this was your theory that the problem is not so much spin-lock contention, but rather those unlucky processes who lose their time-slice while holding the lock.  If that is true, then maybe the problem with your CAS patch is that once the LWLock is contested heavily, the waiters-present bit will be set pretty much all the time, and the code will thus fall back to using the spin-lock.

*Reading the code again*

Hm, I just realized that you only clear the waiters-present bit after emptying the queue completely.  With rising contention, you might reach a point where you never empty the queue completely (unless the lock is only ever acquired in share mode, that is).  As it stands, the CAS patch is effectively neutralized at this point.  It'll even have an adverse effect due to the higher number of atomic operations compared to the unpatched version.

I wonder if clearing the waiters-present bit only upon clearing the queue completely is necessary for correctness.  Wouldn't it be OK to clear the bit after waking up at least one locker?  That'd still guarantee that you cannot end up with a blocked process but no lock holder, I believe.

best regards
Florian Pflug
Re: [HACKERS] Full GUID support
On Jul 12, 2011, at 5:07 PM, Tom Lane wrote:
> Curious considering that the machine is there (responds to ping), and the ossp.org webserver works fine.  Has anyone bugged the owner about that?

I've sent him email and Twitter DMs, to no avail.

Best,

David
Re: [HACKERS] Full GUID support
On Jul 12, 2011, at 5:06 PM, Josh Berkus wrote:
> I'm okay with this, though given the fact that ftp.ossp.org has been down for *months*, I'm inclined to think that we ought to include it in the contrib distribution for easy linking.

What license is it under?

COPYRIGHT AND LICENSE

Copyright (c) 2004-2008 Ralf S. Engelschall r...@engelschall.com
Copyright (c) 2004-2008 The OSSP Project http://www.ossp.org/

This file is part of OSSP uuid, a library for the generation of UUIDs which can found at http://www.ossp.org/pkg/lib/uuid/

Permission to use, copy, modify, and distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.

THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS AND COPYRIGHT HOLDERS AND THEIR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Re: [HACKERS] [COMMITTERS] pgsql: Blind attempt at fixing isolation_tester on Win32
Excerpts from Alvaro Herrera's message of mié jul 13 01:11:41 -0400 2011:
> Blind attempt at fixing isolation_tester on Win32

If this doesn't work, I'm afraid I'll have to ask some Windows person for help :-)

--
Álvaro Herrera <alvhe...@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: [HACKERS] FOR KEY LOCK foreign keys
On Tue, Jul 12, 2011 at 05:59:01PM -0400, Alvaro Herrera wrote:
> Excerpts from Noah Misch's message of vie mar 11 12:51:14 -0300 2011:
>> On Fri, Feb 11, 2011 at 02:13:22AM -0500, Noah Misch wrote:
>>> Automated tests would go a long way toward building confidence that this patch does the right thing.  Thanks to the SSI patch, we now have an in-tree test framework for testing interleaved transactions.  The only thing it needs to be suitable for this work is a way to handle blocked commands.  If you like, I can try to whip something up for that.
>> [off-list ACK followed]
>> Here's a patch implementing that.  It applies to master, with or without your KEY LOCK patch also applied, though the expected outputs reflect the improvements from your patch.  I add three isolation test specs:
>>   fk-contention: blocking-only test case from your blog post
>>   fk-deadlock: the deadlocking test case I used during patch review
>>   fk-deadlock2: Joel Jacobson's deadlocking test case
> Thanks for this patch.  I have applied it, adjusting the expected output of these tests to the HEAD code.  I'll adjust it when I commit the fklocks patch, I guess, but it seemed simpler to have it out of the way; besides it might end up benefitting other people who might be messing with the locking code.

Great.  There have been a few recent patches where I would have used this functionality to provide tests, so I'm glad to have it in.

>> I think this will work on Windows as well as pgbench does, but I haven't verified that.
> We will find out shortly.

I see you've added a fix for the MSVC animals; thanks.

> coypu failed during the run of the test due to a different session being chosen as the deadlock victim.

We can now vary deadlock_timeout to prevent this; see attached fklocks-tests-deadlock_timeout.patch.  This also makes the tests much faster on a default postgresql.conf.

> crake failed when it reported waiting on the first step of an existing isolation test (two-ids.spec).

I will need to look into that further.
Thanks,
nm

diff --git a/src/test/isolation/expected/fk-deadlock.out b/src/test/isolation/expected/fk-deadlock.out
index 6b6ee16..0d86cda 100644
*** a/src/test/isolation/expected/fk-deadlock.out
--- b/src/test/isolation/expected/fk-deadlock.out
***************
*** 32,39 ****
  step s1i: INSERT INTO child VALUES (1, 1);
  step s2i: INSERT INTO child VALUES (2, 1);
  step s2u: UPDATE parent SET aux = 'baz'; <waiting ...>
  step s1u: UPDATE parent SET aux = 'bar';
- step s2u: <... completed>
  ERROR: deadlock detected
  step s2c: COMMIT;
  step s1c: COMMIT;
--- 32,39 ----
  step s2i: INSERT INTO child VALUES (2, 1);
  step s2u: UPDATE parent SET aux = 'baz'; <waiting ...>
  step s1u: UPDATE parent SET aux = 'bar';
  ERROR: deadlock detected
+ step s2u: <... completed>
  step s2c: COMMIT;
  step s1c: COMMIT;
***************
*** 52,59 ****
  step s2i: INSERT INTO child VALUES (2, 1);
  step s1i: INSERT INTO child VALUES (1, 1);
  step s2u: UPDATE parent SET aux = 'baz'; <waiting ...>
  step s1u: UPDATE parent SET aux = 'bar';
- step s2u: <... completed>
  ERROR: deadlock detected
  step s2c: COMMIT;
  step s1c: COMMIT;
--- 52,59 ----
  step s1i: INSERT INTO child VALUES (1, 1);
  step s2u: UPDATE parent SET aux = 'baz'; <waiting ...>
  step s1u: UPDATE parent SET aux = 'bar';
  ERROR: deadlock detected
+ step s2u: <... completed>
  step s2c: COMMIT;
  step s1c: COMMIT;
diff --git a/src/test/isolation/expected/fk-deadlock2.out b/src/test/isolation/expected/fk-deadlock2.out
index af3ce8e..6e7f12d 100644
*** a/src/test/isolation/expected/fk-deadlock2.out
--- b/src/test/isolation/expected/fk-deadlock2.out
***************
*** 42,49 ****
  step s1u1: UPDATE A SET Col1 = 1 WHERE AID = 1;
  step s2u1: UPDATE B SET Col2 = 1 WHERE BID = 2;
  step s2u2: UPDATE B SET Col2 = 1 WHERE BID = 2; <waiting ...>
  step s1u2: UPDATE B SET Col2 = 1 WHERE BID = 2;
- step s2u2: <... completed>
  ERROR: deadlock detected
  step s1c: COMMIT;
  step s2c: COMMIT;
--- 42,49 ----
  step s2u1: UPDATE B SET Col2 = 1 WHERE BID = 2;
  step s2u2: UPDATE B SET Col2 = 1 WHERE BID = 2; <waiting ...>
  step s1u2: UPDATE B SET Col2 = 1 WHERE BID = 2;
  ERROR: deadlock detected
+ step s2u2: <... completed>
  step s1c: COMMIT;
  step s2c: COMMIT;
***************
*** 52,59 ****
  step s1u1: UPDATE A SET Col1 = 1 WHERE AID = 1;
  step s2u1: UPDATE B SET Col2 = 1 WHERE BID = 2;
  step s2u2: UPDATE B SET Col2 = 1 WHERE BID = 2; <waiting ...>
  step s1u2: UPDATE B SET Col2 = 1 WHERE BID = 2;
- step s2u2: <... completed>
  ERROR: deadlock detected
  step s2c: COMMIT;
  step s1c: COMMIT;
--- 52,59 ----
  step s2u1: UPDATE B SET Col2 = 1 WHERE BID = 2;
  step s2u2: UPDATE B SET Col2 = 1 WHERE BID = 2; <waiting ...>
  step s1u2: UPDATE B SET Col2 = 1 WHERE BID = 2;
  ERROR: deadlock detected
+ step s2u2: <... completed>
  step s2c: COMMIT;
  step s1c: COMMIT;
***************
*** 82,89 ****
  step s2u1: UPDATE B SET
Re: [HACKERS] dropping table in testcase alter_table.sql
On Wed, Jul 13, 2011 at 2:53 AM, Peter Eisentraut <pete...@gmx.net> wrote:
> On tis, 2011-07-12 at 08:51 -0400, Tom Lane wrote:
>> Peter Eisentraut <pete...@gmx.net> writes:
>>> It has occurred to me a few times that it could be useful to clarify the approach here.  If we could somehow have a separable cleanup step for every test, and eliminate interdependencies between tests, we could more easily support a number of use cases such as creating a completely populated regression test database for playing, or running tests in random order or in differently parallelized scenarios.
>> The limiting case of this is that each regression test script would be expected to start in an empty database and leave the DB empty on exit.  I think that would make the tests less useful, not more, for several reasons:
>> 1. They'd be slower, since every test would have to start by creating and populating some tables.
>> 2. The final state of the regression database would no longer be useful as an environment for running ad-hoc manual tests.
>> 3. The final state of the regression database would no longer be useful as a test case for pg_dump and pg_upgrade.

If the tests are leaving behind the objects unintentionally, we cannot be sure whether the state of the objects before upgrade/dump (or for that matter anything else) is intentional.  If one needs to truly test upgrade and dump, the state of objects in the database just before upgrading/dumping needs to be arrived at in a controlled manner.  IOW, if a test wants to leave behind objects in a certain state for some further testing, it should be intentional.  Maybe those objects should be annotated as such (say, in the comments?).  All the other objects had better be cleaned up.

That said, these particular two tables have very common names, tab1 and tab2, which someone could pick up easily, thus linking two test cases unintentionally.  So, at the least we should make sure that if we use such common names for objects, we clean them up at the end of the test.

If some object needs to be left behind, we can give it a special name (say, a name that includes the test case name, like alter_tab_tab1), so that there is less chance of interference with later tests.  In the case of #2 and #3 it also serves two purposes:

1. Identifying the test case which created/manipulated these objects last.
2. Letting us trace the things that affected this object before it came to a certain state, which can be useful information when debugging problems.

> I think you misunderstood what I was saying.  I wanted to take out the cleanup parts of all test cases and make it a choice whether to run them.  Right now we have a lot of test cases that clean up after themselves, which is useful in some cases (testing the cleaning, for one thing), but not useful for 2. and 3.

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company