Re: [HACKERS] PL/pgSQL Todo, better information in errcontext from plpgsql
On Oct 5, 2006, at 9:30 AM, Pavel Stehule wrote: With func oid I can get all other info later; without it, I have to guess which functions are in the stack trace. Why do you need the OID to know exactly what function something is? What's wrong with schema.function(args)? -- The oid is simply unique. I can get the source code, args, and everything else without parsing. That's the only difference. I dislike parsing. I don't think the regression tests need to show errcontext now. Regression tests are in C, and we can set exactly what we want to show. Regards Pavel Stehule ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] timestamptz alias
Jim Nasby wrote: It would be nice to denote types/aliases that are and aren't ANSI. A number are marked in the docs, but it would be good to add the info to that summary table. Right under the table this sentence appears: Compatibility: The following types (or spellings thereof) are specified by SQL: bit, bit varying, boolean, char, character varying, character, varchar, date, double precision, integer, interval, numeric, decimal, real, smallint, time (with or without time zone), timestamp (with or without time zone). What's unclear about that? cheers andrew
Re: [HACKERS] Storing MemoryContext Pointers
On Oct 05 03:34, Tom Lane wrote: Volkan YAZICI [EMAIL PROTECTED] writes: When I allocate a new memory context via oldmcxt = AllocSetContextCreate(TopMemoryContext, ...) persistent_mcxt = CurrentMemoryContext; ITYM persistent_mcxt = AllocSetContextCreate(TopMemoryContext, ...) Oops! Right. (I was looking at some MemoryContextSwitchTo() code while typing the above lines.) because the other doesn't do what you think... How can I store the persistent_mcxt in a persistent place so that I'll be able to reach it the next time I'm invoked? Make it a static variable. I had thought of some kind of fcinfo->flinfo->fn_extra trick, but a static variable is fair enough too. Thanks so much for the answer (also for your reply to the caching-subject post). Regards.
Re: [HACKERS] continuing daily testing of dbt2 against postgresql
[EMAIL PROTECTED] wrote: Mark Wong [EMAIL PROTECTED] writes: After over a year of problems (old site http://developer.osdl.org/markw/postgrescvs/) I have resumed producing daily results of dbt-2 against PostgreSQL CVS code with results here: http://dbt.osdl.org/dbt2.html This is good to hear! I am curious where we are now compared to where we were a year ago ... do you still have the old data, and is the test setup still comparable? The test setup is on completely different hardware. I still have the old data and it's accessible, but it'll take a little bit of work to regenerate the links. I'll try to work on that. I think it would also help if you would create reference runs for the latest 8.0 and 8.1 releases on the new hardware. Best Regards Michael Paesold
Re: [HACKERS] pg_dump exclusion switches and functions/types
Hi, Tom, Tom Lane wrote: One issue is what to do with procedural languages and large objects, which don't have any associated schema. If we treat them as being outside all schemas, we'd have semantics like this: dump the PLs and blobs unless one or more --schema switches appeared. Is that OK? Sounds fine. Is there a possibility to dump only those objects? Maybe --large-objects and --languages? Thanks, Markus -- Markus Schaber | Logical Tracking & Tracing International AG Dipl. Inf. | Software Development GIS Fight against software patents in Europe! www.ffii.org www.nosoftwarepatents.org
Re: [HACKERS] Win XP SP2 SMP locking (8.1.4)
On Thu, 5 Oct 2006, Magnus Hagander wrote: Hi there, I'm looking into strange locking that happens on a WinXP SP2 SMP machine running 8.1.4 with stats_row_level=on. This is the only combination (# of CPUs and stats_row_level) that has the problem - SMP + stats_row_level. The same test runs fine with one CPU (machine restarted with /numproc=1) regardless of the stats_row_level option. The customer's application loads data into the database, and sometimes the process stops - no CPU, no I/O activity. PgAdmin shows the current query is 'COMMIT'. I tried to attach gdb to the postgres and client processes, but the backtrace looks useless (see below). Running vacuum analyze on this database in a separate process causes the loading process to continue! Weird. Interestingly, there is no problem with 8.2beta1 in any combination! Any idea what changes from 8.1.4 to 8.2beta1 could affect the problem? There is a new implementation of semaphores in 8.2. That could possibly be it. I backported it to REL8_1_STABLE but it didn't help. Any other idea what to do, or how to debug the situation? Regards, Oleg _ Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru), Sternberg Astronomical Institute, Moscow University, Russia Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/ phone: +007(495)939-16-83, +007(495)939-23-83
Re: [HACKERS] Win XP SP2 SMP locking (8.1.4)
I'm looking into strange locking that happens on a WinXP SP2 SMP machine running 8.1.4 with stats_row_level=on. This is the only combination (# of CPUs and stats_row_level) that has the problem - SMP + stats_row_level. The same test runs fine with one CPU (machine restarted with /numproc=1) regardless of the stats_row_level option. The customer's application loads data into the database, and sometimes the process stops - no CPU, no I/O activity. PgAdmin shows the current query is 'COMMIT'. I tried to attach gdb to the postgres and client processes, but the backtrace looks useless (see below). Running vacuum analyze on this database in a separate process causes the loading process to continue! Weird. Interestingly, there is no problem with 8.2beta1 in any combination! Any idea what changes from 8.1.4 to 8.2beta1 could affect the problem? There is a new implementation of semaphores in 8.2. That could possibly be it. I backported it to REL8_1_STABLE but it didn't help. Any other idea what to do, or how to debug the situation? Unfortunately, the debugger support for mingw is absolutely horrible. But you can try Process Explorer from www.sysinternals.com and see if it'll give you a decent backtrace. Sometimes it works when others don't. Either that, or try the Visual Studio or Windows debuggers; they can usually at least show you if it's stuck waiting on something in the kernel. //Magnus
Re: [HACKERS] pg_dump exclusion switches and functions/types
Tom Lane wrote: Kris Jurka [EMAIL PROTECTED] writes: Testing out the new pg_dump exclusion switches I've found that excluding a table means that no functions or types will be dumped. Excluding one table shouldn't exclude these objects. I tend to agree ... will see if I can make it happen. (I never did get around to reviewing that patch, anyway ...) One issue is what to do with procedural languages and large objects, which don't have any associated schema. If we treat them as being outside all schemas, we'd have semantics like this: dump the PLs and blobs unless one or more --schema switches appeared. Is that OK? Is there a reason why pg_dump can't do the --list/--use-list flags like pg_restore, or is it just a matter of round tuits? -- Richard Huxton Archonet Ltd
Re: [HACKERS] pg_dump exclusion switches and functions/types
Richard Huxton wrote: Is there a reason why pg_dump can't do the --list/--use-list flags like pg_restore, or is it just a matter of round tuits? The major reason for having those features as I understand it was to help overcome dependency difficulties in dumps, which are now largely a thing of the past. However, ISTM that a similar facility for fine grained control could fairly easily be built into pg_dump. cheers andrew
Re: [HACKERS] Upgrading a database dump/restore
Martijn van Oosterhout wrote: On Thu, Oct 05, 2006 at 04:39:22PM -0400, Mark Woodward wrote: Indeed. The main issue for me is that the dumping and replication setups require at least 2x the space of one db. That's 2x the hardware, which equals 2x $$$. If there were some tool which modified the storage while postgres is down, that would save lots of people lots of money. It's time and money. Stopping a database and starting with new software is a lot faster than dumping the data out (disallowing updates or inserts) and restoring it, which can take hours or days *and* twice the hardware. In that case there should be people willing to fund the development. There have been a few people (even in the last few weeks) who say they're looking into it; perhaps they need a helping hand? There are still people who are working on it :-). I'm working on a catalog conversion prototype - I'll put out a request for help early ;-). Zdenek
Re: [HACKERS] Storing MemoryContext Pointers
Volkan YAZICI [EMAIL PROTECTED] writes: On Oct 05 03:34, Tom Lane wrote: Volkan YAZICI [EMAIL PROTECTED] writes: How can I store the persistent_mcxt in a persistent place that I'll be able to reach it in my next getting invoked? Make it a static variable. I had thought some kind of fcinfo-flinfo-fn_extra trick but... a static variable is fair enough too. No, a fn_extra pointer would go away whenever the calling query ends, and you'd have leaked the context permanently. Children of TopMemoryContext will never go away unless explicitly destroyed, so a static variable has the right lifespan to remember them. If you did want something that goes away at end of query, you could probably make it a child of the PortalContext. regards, tom lane
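[Editor's note] Tom's distinction between lifespans is easy to model. Below is a small standalone sketch (Python used only for illustration; the real backend code is C, and the MemoryContext class here is a toy stand-in, not the actual API): a module-level variable plays the role of the C static, so the context it points to survives from one invocation to the next, while a per-query context vanishes when the query ends.

```python
# Toy model of PostgreSQL memory-context lifetimes. The names mirror the
# backend concepts, but this is only an illustration, not server code.

class MemoryContext:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        self.alive = True
        if parent:
            parent.children.append(self)

    def delete(self):
        # Deleting a context recursively deletes its children.
        for child in self.children:
            child.delete()
        self.alive = False

TopMemoryContext = MemoryContext("TopMemoryContext")

# The "static variable": module-level, so it survives across invocations.
_persistent_mcxt = None

def my_function():
    global _persistent_mcxt
    if _persistent_mcxt is None:
        # A child of TopMemoryContext lives until explicitly destroyed.
        _persistent_mcxt = MemoryContext("my_persistent_cxt", TopMemoryContext)
    return _persistent_mcxt

def run_query():
    # A per-query context is created and destroyed around each call;
    # anything hung off it (like a fn_extra pointer) dies with it.
    query_cxt = MemoryContext("PortalContext", TopMemoryContext)
    cxt = my_function()
    query_cxt.delete()          # end of query
    return cxt

first, second = run_query(), run_query()
print(first is second, first.alive)
```

Run as written, both invocations see the same still-alive context, which is exactly the lifespan Tom describes for a static pointing at a child of TopMemoryContext.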
Re: [HACKERS] pg_dump exclusion switches and functions/types
Andrew Dunstan [EMAIL PROTECTED] writes: However, ISTM that a similar facility for fine grained control could fairly easily be built into pg_dump. Yeah ... later. The way I envision it is that the schema-related switches are fine for selecting things at the level of whole schemas, and the table-related switches are fine for selecting individual tables, and what we lack are inclusion/exclusion switches that operate on other kinds of individual objects. Somebody can design and implement those later, if the itch strikes. What we have to do today is make sure that the interaction of schema and table switches is such that an extension in that direction will fit in naturally. regards, tom lane
Re: [HACKERS] PL/pgSQL Todo, better information in errcontext from plpgsql
Pavel Stehule [EMAIL PROTECTED] writes: Why do you need the OID to know exactly what function something is? What's wrong with schema.function(args)? The oid is simply unique. I can get the source code, args, and everything else without parsing. That's the only difference. I dislike parsing. That isn't an adequate reason for pushing an implementation detail into the user's face. IMHO no error message seen by ordinary users should mention OIDs at all. A thought experiment: what would you do with the message if we were to reimplement the system to not use OIDs? I'm willing to talk about putting a complete specification of the function (with schema and arg types) into the context line, but you haven't really made the case why that wouldn't be just clutter for most people. regards, tom lane
Re: [HACKERS] pg_dump exclusion switches and functions/types
Tom Lane wrote: Andrew Dunstan [EMAIL PROTECTED] writes: However, ISTM that a similar facility for fine grained control could fairly easily be built into pg_dump. Yeah ... later. The way I envision it is that the schema-related switches are fine for selecting things at the level of whole schemas, and the table-related switches are fine for selecting individual tables, and what we lack are inclusion/exclusion switches that operate on other kinds of individual objects. Somebody can design and implement those later, if the itch strikes. What we have to do today is make sure that the interaction of schema and table switches is such that an extension in that direction will fit in naturally. totally agree. cheers andrew
Re: [HACKERS] timestamp subtraction (was Re: [SQL] formatting intervals
Great, it's nice to see that this might get rolled into one of the next releases. Thanks, Graham. Tom Lane wrote: Michael Glaesemann [EMAIL PROTECTED] writes: Considering how late it is in the cycle, perhaps the change in behavior should come in 8.3. Yeah, there's not really enough time to think through the consequences now. I'd like to experiment with it for 8.3 though. regards, tom lane -- Graham Davis Refractions Research Inc. [EMAIL PROTECTED]
Re: [HACKERS] PL/pgSQL Todo, better information in errcontext from plpgsql
On Oct 6, 2006, at 1:47 AM, Pavel Stehule wrote: On Oct 5, 2006, at 9:30 AM, Pavel Stehule wrote: With func oid I can get all other info later; without it, I have to guess which functions are in the stack trace. Why do you need the OID to know exactly what function something is? What's wrong with schema.function(args)? -- The oid is simply unique. I can get the source code, args, and everything else without parsing. That's the only difference. I dislike parsing.

decibel=# select 'pg_catalog.abstimelt (abstime,abstime)'::regprocedure::oid;
 oid
-----
 253

-- Jim Nasby [EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
Re: [HACKERS] pg_dump exclusion switches and functions/types
Kris Jurka [EMAIL PROTECTED] writes: Testing out the new pg_dump exclusion switches I've found that excluding a table means that no functions or types will be dumped. Excluding one table shouldn't exclude these objects. I've been chewing on this a bit and find that the existing patch has several behaviors that seem surprising. Considering just one type of inclusion/exclusion switches at a time (we'll get to the interaction of schema and table switches below), I think we can all agree without too much argument on these statements: * With no inclusion/exclusion switches, all objects except system objects should be dumped. * With only exclusion switches given, all objects except system objects and those matching at least one pattern should be dumped. * With only inclusion switches given, only those objects matching at least one pattern should be dumped (whether they are system objects or not). That last proviso might be debatable but on balance I think it's OK (for instance, pg_dump --schema=information_schema could be useful for debugging or documentation purposes). Where things get interesting is when you have both inclusion and exclusion switches given. The existing patch's behavior is that the rightmost switch wins, ie, if an object's name matches more than one pattern then it is included or excluded according to the rightmost switch it matches. This is, erm, poorly documented, but it seems like useful behavior so I don't have an objection myself. The real question is what should happen to objects that don't match any of the switch patterns? (This is relevant to Kris' complaint because non-table objects should be treated the same as tables that don't match any table name inclusion/exclusion switches.) What I find in the existing code is that if an inclusion switch appears first: pg_dump -n 's.*' -N 'ss.*' ... then only schemas matching an inclusion switch (and not matching any later exclusion switch) are dumped. 
While if an exclusion switch appears first: pg_dump -N 's.*' -n 'ss.*' ... then all schemas are dumped except system schemas and those matching an exclusion switch (and not matching any later inclusion switch). So the default behavior for unmatched objects flips depending on switch order. This doesn't seem to satisfy the principle of least surprise, and it's certainly not adequately documented. It might be the most useful behavior though. I thought about the alternative rule that if any inclusion switches appear at all, the default is not to dump --- that is, an object must match at least one inclusion switch (and not match any later exclusion switch) to be dumped. But with that rule, exclusion switches before the first inclusion switch are actually useless. Has anyone got a better idea? Returning to the point about schema versus table selection switches, what we've got is that they are independent filters: to be dumped, a table must be in a schema selected by the schema inclusion/exclusion switches (if any), and it must have a name selected by the table inclusion/exclusion switches (if any). I think this is OK but it leads to the property that the order of -n/-N switches is relevant, and the order of -t/-T switches is relevant, but their order relative to each other is not relevant. This could be surprising. If you're still with me, the payoff is here: what are the rules for dumping non-table objects, given that there are no inclusion/exclusion switches for them (but we might want to add such later)? If only schema inclusion/exclusion switches are present, then it's relatively easy to say dump objects that are in selected schemas --- but what about objects that don't have a schema, such as PLs? And what about the case where table inclusion/exclusion switches are present? I said above that non-table objects should be treated the same way as unmatched tables, which I think is a necessary rule if we want to extend the set of switch types later. 
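[Editor's note] The rule Tom is describing, rightmost matching switch wins, with the default for unmatched objects depending on switch order, can be stated compactly. This is a Python sketch of the semantics as described in the message, not pg_dump's actual C code; the anchored patterns are written out explicitly:

```python
import re

def dumped(name, switches, default_when_unmatched):
    """switches: ordered list of ('include'|'exclude', regex).
    The rightmost switch whose pattern matches decides; an object
    matching no switch falls back to the given default."""
    decision = default_when_unmatched
    for kind, pattern in switches:      # later switches override earlier ones
        if re.search(pattern, name):
            decision = (kind == 'include')
    return decision

# pg_dump -n 's.*' -N 'ss.*'  (inclusion first: unmatched schemas not dumped)
sw = [('include', '^s.*$'), ('exclude', '^ss.*$')]
print(dumped('sales', sw, False))   # matches only -n        -> dumped
print(dumped('ssn', sw, False))     # matches both; -N wins  -> not dumped
print(dumped('public', sw, False))  # matches neither        -> default
```

With the switch order reversed (-N first), the same evaluator would be called with default_when_unmatched=True for non-system objects, which is exactly the order-dependent default behavior being questioned.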
But that leads to the conclusion that a non-table object is dumped unless a -t switch appears before any -T switches. Which strikes me as a mighty surprising behavior. I'm not sure what to do differently though. Lastly, as long as we're questioning the premises of this patch, I wonder about the choice to use regex pattern matching rules. The problem with regex is that to be upward-compatible with the old exact-match switch definitions, a switch value that doesn't contain any regex special characters is treated as an equality condition not a pattern, which makes for a discontinuity. For instance, -t x is treated like -t '^x$' while -t 'x.*y' doesn't get the anchors added. That's going to burn people. An alternative we could consider is to use LIKE patterns instead, but since underscore is a wildcard in LIKE, it's easy to imagine people getting burnt by that too. Or we could import the rather ad-hoc shell-wildcard-like rules used by psql's \d stuff. None of these are especially attractive :-( Comments? regards, tom lane
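[Editor's note] The anchoring discontinuity is easy to see in a sketch. The helper below is hypothetical, merely mirroring the rule as Tom describes it (a value with no regex metacharacters gets anchors added; anything else is passed through):

```python
import re

def to_pattern(switch_value):
    # If the value contains no regex special characters, treat it as an
    # exact match (anchor it); otherwise pass it through as a raw regex.
    if re.search(r'[.*+?^$()\[\]{}|\\]', switch_value):
        return switch_value
    return '^' + switch_value + '$'

print(to_pattern('x'))      # anchored, like the old exact-match behavior
print(to_pattern('x.*y'))   # no anchors added
# The surprise: an "unanchored" pattern matches in the middle of a name.
print(bool(re.search(to_pattern('x.*y'), 'axbyc')))
```

So -t x matches only the table named x, while -t 'x.*y' also picks up a table named axbyc, which is the kind of thing likely to burn people.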
Re: [HACKERS] Win XP SP2 SMP locking (8.1.4)
Didn't the stats communication process get redone for 8.2? Or at least some time-out related stuff. Since the problem seems to be related to stats_row_level being on, I wonder if the problem might be in that sub-system. I am guessing that vacuum is pushing some more stats through, which might explain how that allows it to unfreeze. -rocco -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Magnus Hagander Sent: Friday, October 06, 2006 6:00 AM To: Oleg Bartunov Cc: Pgsql Hackers Subject: Re: [HACKERS] Win XP SP2 SMP locking (8.1.4) I'm looking into strange locking that happens on a WinXP SP2 SMP machine running 8.1.4 with stats_row_level=on. This is the only combination (# of CPUs and stats_row_level) that has the problem - SMP + stats_row_level. The same test runs fine with one CPU (machine restarted with /numproc=1) regardless of the stats_row_level option. The customer's application loads data into the database, and sometimes the process stops - no CPU, no I/O activity. PgAdmin shows the current query is 'COMMIT'. I tried to attach gdb to the postgres and client processes, but the backtrace looks useless (see below). Running vacuum analyze on this database in a separate process causes the loading process to continue! Weird. Interestingly, there is no problem with 8.2beta1 in any combination! Any idea what changes from 8.1.4 to 8.2beta1 could affect the problem? There is a new implementation of semaphores in 8.2. That could possibly be it. I backported it to REL8_1_STABLE but it didn't help. Any other idea what to do, or how to debug the situation? Unfortunately, the debugger support for mingw is absolutely horrible. But you can try Process Explorer from www.sysinternals.com and see if it'll give you a decent backtrace. Sometimes it works when others don't. Either that, or try the Visual Studio or Windows debuggers; they can usually at least show you if it's stuck waiting on something in the kernel.
//Magnus
Re: [HACKERS] pg_dump exclusion switches and functions/types
[Snip explanations] Comments? Would it be reasonable to include one more switch: 'include dependencies'? That would work like this: * first, consider all objects to be included (possibly limited by the include switches); * if dependencies are included, add all dependent objects, plus non-schema objects (which arguably can be considered dependencies of the whole database); * remove all objects targeted by exclude switches. This way you won't have any dependency on switch ordering, and you could treat all non-schema objects as dependencies, so they would only be included if dependencies are to be included. Excluding dependencies would be the default if any switches are specified, including them otherwise (I'm not sure how well this satisfies the principle of least surprise, but it would be backwards compatible). The scenario I most care about is being able to make a complete database dump (including non-schema objects) while excluding only a few tables. If I understood your explanations correctly, this is not easily possible with the current implementation. Note that I have a patch (kludge?) on the 8.1 pg_dump which does exactly this; it would be a pity if I needed to patch the 8.2 one again to do that... Cheers, Csaba.
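[Editor's note] Csaba's three steps have a natural set formulation. The sketch below is only an illustration of the proposal (select_objects and the deps dependency map are hypothetical; nothing like this exists in pg_dump):

```python
def select_objects(all_objects, includes, excludes, deps, include_deps):
    # Step 1: start from the include switches (or everything if none given).
    selected = ({o for o in all_objects if o in includes}
                if includes else set(all_objects))
    # Step 2: optionally close over dependencies (including non-schema
    # objects such as PLs, treated as dependencies of the whole database).
    if include_deps:
        frontier = set(selected)
        while frontier:
            new = set().union(*(deps.get(o, set()) for o in frontier)) - selected
            selected |= new
            frontier = new
    # Step 3: excludes are applied last, so switch ordering no longer matters.
    return selected - set(excludes)

# A table that depends on a PL and a type; dumping just the table pulls both in.
deps = {'orders': {'plpgsql', 'order_status_type'}}
objs = ['orders', 'archive', 'plpgsql', 'order_status_type']
print(sorted(select_objects(objs, {'orders'}, set(), deps, True)))
```

Because excludes are subtracted at the end, "complete dump minus a few tables" is simply empty includes plus a small exclude set, independent of where the switches appear on the command line.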
Re: [HACKERS] pg_dump exclusion switches and functions/types
Tom Lane wrote: Lastly, as long as we're questioning the premises of this patch, I wonder about the choice to use regex pattern matching rules. The problem with regex is that to be upward-compatible with the old exact-match switch definitions, a switch value that doesn't contain any regex special characters is treated as an equality condition not a pattern, which makes for a discontinuity. For instance, -t x is treated like -t '^x$' while -t 'x.*y' doesn't get the anchors added. That's going to burn people. An alternative we could consider is to use LIKE patterns instead, but since underscore is a wildcard in LIKE, it's easy to imagine people getting burnt by that too. Or we could import the rather ad-hoc shell-wildcard-like rules used by psql's \d stuff. None of these are especially attractive :-( Comments? 1. regexes, please. 2. I'd rather remove backwards compatibility than have the discontinuity. After all, users can anchor the expression pretty easily. If not, then let's use an alternate switch for the regexes (I know we are running out of option space). cheers andrew
Re: [HACKERS] pg_dump exclusion switches and functions/types
Csaba Nagy [EMAIL PROTECTED] writes: Would it be reasonable to include one more switch: 'include dependencies' ? We are two months past feature freeze ... adding entirely new features to pg_dump is *not* on the table for 8.2. What we need to do at the moment is make sure that the features we've got work sanely and won't create headaches for likely future extensions; but not actually implement those extensions. The scenario I most care about is to be able to make a complete data base dump (including non-schema objects) while excluding only a few tables. Isn't this the same as Kris' complaint? Why do you need additional dependency analysis to do the above? regards, tom lane
Re: [HACKERS] pg_dump exclusion switches and functions/types
Andrew Dunstan [EMAIL PROTECTED] writes: Tom Lane wrote: ... Or we could import the rather ad-hoc shell-wildcard-like rules used by psql's \d stuff. None of these are especially attractive :-( 1. regexes, please. One argument that occurs to me for importing the psql code is that it's solved the problem of including a schema name in the pattern. It would be a lot nicer to say -t schema.table than to have to say -t table -n schema. In particular this allows one to dump s1.foo and s2.bar without also getting s1.bar and s2.foo; a problem that is insoluble if we have only independent schema and table filters. I think that ideally one would only use the schema switches if one actually wanted a schema-by-schema dump, not as a wart on the side of the specific-object-selection switches. The psql code does allow you to get at most of the functionality of regexes... regards, tom lane
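[Editor's note] The dotted-pattern idea can be sketched as follows. The splitter below is hypothetical illustration in Python (the real psql pattern code also handles quoting and wildcard translation); it shows why per-pattern schema qualification can select exactly s1.foo and s2.bar, which independent -n and -t filters cannot:

```python
def split_pattern(pattern):
    # 'schema.table' -> (schema_pattern, table_pattern);
    # a bare 'table' leaves the schema unconstrained.
    if '.' in pattern:
        schema, table = pattern.split('.', 1)
        return schema, table
    return None, pattern

patterns = ['s1.foo', 's2.bar']         # e.g. pg_dump -t s1.foo -t s2.bar

def selected(qualified_name):
    sch, tab = qualified_name.split('.')
    return any(s in (None, sch) and tab == t
               for s, t in map(split_pattern, patterns))

for name in ['s1.foo', 's1.bar', 's2.foo', 's2.bar']:
    print(name, selected(name))
```

Only s1.foo and s2.bar come out selected; with independent schema and table filters (-n 's1|s2' -t 'foo|bar'), s1.bar and s2.foo would unavoidably be swept in too.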
[HACKERS] Backbranch releases
Agreed we need to push out the back branch releases. Let me know what I can do to help. -- Bruce Momjian [EMAIL PROTECTED] EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] SQL:2003 Statistical functions - What are they?
On Wed, Oct 04, 2006 at 08:46:14 -0400, Obe, Regina [EMAIL PROTECTED] wrote: I'm not sure this is the right group to ask this. I see that the 8.2 notes say all SQL:2003 statistical functions are implemented in 8.2, but I couldn't find a listing for those anywhere I looked. For those who are clueless like me, can someone provide a listing of what these functions are? They are grouped under aggregate functions. See: http://developer.postgresql.org/pgdocs/postgres/functions-aggregate.html#FUNCTIONS-AGGREGATE-STATISTICS-TABLE
Re: [HACKERS] Backbranch releases
Bruce Momjian [EMAIL PROTECTED] writes: Agreed we need to push out the back branch releases. Let me know what I can do to help. Making up the release notes is the only large bit of work ... do you want to do that? FYI to the rest of you: we're planning back-branch releases before 8.2 final, since 8.1 in particular is well overdue for an update. Current proposal is to wrap tarballs on Thursday 10/12 for public release Monday 10/16. If you've got any back-branch patches sitting around, send 'em in now ... regards, tom lane
Re: [HACKERS] Backbranch releases
Tom Lane wrote: Bruce Momjian [EMAIL PROTECTED] writes: Agreed we need to push out the back branch releases. Let me know what I can do to help. Making up the release notes is the only large bit of work ... do you want to do that? Sure, and stamping. How far back do you want to go? FYI to the rest of you: we're planning back-branch releases before 8.2 final, since 8.1 in particular is well overdue for an update. Current proposal is to wrap tarballs on Thursday 10/12 for public release Monday 10/16. If you've got any back-branch patches sitting around, send 'em in now ... OK.
Re: [HACKERS] Backbranch releases
Bruce Momjian [EMAIL PROTECTED] writes: Sure, and stamping. How far back do you want to go? We might as well go back to 7.3 --- I saw Teodor back-patched some of his contrib/ltree fixes that far. regards, tom lane
Re: [HACKERS] Backbranch releases
On Fri, 6 Oct 2006, Tom Lane wrote: Bruce Momjian [EMAIL PROTECTED] writes: Agreed we need to push out the back branch releases. Let me know what I can do to help. Making up the release notes is the only large bit of work ... do you want to do that? FYI to the rest of you: we're planning back-branch releases before 8.2 final, since 8.1 in particular is well overdue for an update. Current proposal is to wrap tarballs on Thursday 10/12 for public release Monday 10/16. If you've got any back-branch patches sitting around, send 'em in now ... We probably have one patch for the 8.1 stable branch which seems to have helped with locking on an SMP Windows setup. I'm currently testing it and it looks good. Regards, Oleg
Re: [HACKERS] Backbranch releases
Oleg Bartunov oleg@sai.msu.su writes: We probably have one patch for the 8.1 stable branch which seems to have helped with locking on an SMP Windows setup. I'm currently testing it and it looks good. Cool, what's the patch? regards, tom lane
[HACKERS] 8.2 translation status?
I saw you did a round of message-style policing today. Are we ready to declare a string freeze, and/or call for translation work? Given the relatively quiet situation for bug reports, I'm thinking that the message translation work may be the critical path for getting 8.2 out. regards, tom lane
Re: [HACKERS] Backbranch releases
On Fri, 6 Oct 2006, Tom Lane wrote: Oleg Bartunov oleg@sai.msu.su writes: We, probably, have one patch for 8.1 stable branch which seems helped with locking on SMP Windows setup. I'm currently testing it and it looks good. Cool, what's the patch? Unfortunately, after several hours of testing I just got the same locking. I'll continue testing with Teodor and Magnus. It's a real-life scenario, so I think it should be fixed for the 8.1 stable branch. The problem also exists in 8.2. Regards, Oleg
[HACKERS] Should there be a charcat?
 Column |     Type      | Modifiers
--------+---------------+-----------
 c      | character(20) |

decibel=# select c, c || ('x'::char) from c;
          c           | ?column?
----------------------+----------
 x                    | xx

I would think that c || 'x' would result in 'x' padded to 20 characters followed by 'x', but it doesn't, because the only text concatenation function we have is textcat(text, text). As soon as you cast a char to text, you lose all the trailing spaces. Is this what the spec says should happen? -- Jim Nasby [EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
Re: [HACKERS] Should there be a charcat?
Jim C. Nasby [EMAIL PROTECTED] writes: I would think that c || 'x' would result in 'xx', but it doesn't We did it that way up through 7.3, but changed because we concluded the behavior was inconsistent. The behavior of char(N) these days is that the padding spaces are not semantically significant, which means that they effectively are not there for anything except display purposes. If we didn't do this, we would have situations where, for example, A = B but (A || 'x') != (B || 'x'). Does that strike you as a good idea? If you're interested you can probably find more discussion in the archives from mid-2003 or so. regards, tom lane
[HACKERS] array_accum aggregate
Greetings, The array_accum example aggregate in the user documentation works reasonably on small data sets but doesn't work too hot on large ones. http://www.postgresql.org/docs/8.1/static/xaggr.html Normally I wouldn't care particularly much but it turns out that PL/R uses arrays for quite a bit (eg: histograms and other statistics functions). I've also heard other complaints about the performance of arrays, though I'm not sure if those were due to array_accum or something else. Long story short, I set out to build a faster array_accum. Much to my surprise and delight, we already *had* one. accumArrayResult() and makeArrayResult()/construct_md_array() appear to do a fantastic job. I've created a couple of 'glue' functions to expose these functions so they can be used in an aggregate. I'm sure they could be improved upon and possibly made even smaller than they already are (90 lines total for both) but I'd like to throw out the idea of including them in core. The aggregate created with them could also be considered for inclusion though I'm less concerned with that. I don't expect general PostgreSQL users would have trouble creating the aggregate- I don't know that the average user would be able or willing to write the C functions. For comparison, the new functions run with: time psql -c "select aaccum(generate_series) from generate_series(1,100);" > /dev/null 4.24s real 0.34s user 0.06s system Compared to: time psql -c "select array_accum(generate_series) from generate_series(1,100);" > /dev/null ... Well, it's still running and it's been over an hour. The main differences, as I see it, are: accumArrayResult() works in chunks of 64 elements, and uses repalloc(). array_accum uses array_set() which works on individual elements and uses palloc()/memcpy(). I appreciate that this is done because for most cases of array_set() it's not acceptable to modify the input and am not suggesting that be changed.
An alternative might be to modify array_set() to check if it is in an aggregate and change its behavior but adding the separate functions seemed cleaner and much less intrusive to me. Please find the functions attached. Thanks, Stephen

#include "postgres.h"
#include "fmgr.h"
#include "utils/array.h"
#include "nodes/execnodes.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(aaccum_sfunc);

Datum
aaccum_sfunc(PG_FUNCTION_ARGS)
{
    bytea      *storage;
    Datum       element;
    ArrayBuildState *astate = NULL;
    AggState   *aggstate;

    /* Make sure we are in an aggregate. */
    if (!fcinfo->context || !IsA(fcinfo->context, AggState))
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("Can not call aaccum_sfunc as a non-aggregate")));

    aggstate = (AggState *) fcinfo->context;

    /*
     * Initial call just passes in NULLs, so just allocate memory and get
     * set up.
     */
    if (PG_ARGISNULL(0))
    {
        storage = (bytea *) palloc(VARHDRSZ + sizeof(ArrayBuildState *));
        storage->vl_len = VARHDRSZ + sizeof(ArrayBuildState *);
        astate = NULL;
        memcpy(storage->vl_dat, &astate, sizeof(astate));
    }
    else
        storage = PG_GETARG_BYTEA_P(0);

    memcpy(&astate, storage->vl_dat, sizeof(astate));

    element = PG_GETARG_DATUM(1);
    astate = accumArrayResult(astate, element, PG_ARGISNULL(1),
                              get_fn_expr_argtype(fcinfo->flinfo, 1),
                              aggstate->aggcontext);

    memcpy(storage->vl_dat, &astate, sizeof(astate));

    PG_RETURN_BYTEA_P(storage);
}

PG_FUNCTION_INFO_V1(aaccum_ffunc);

Datum
aaccum_ffunc(PG_FUNCTION_ARGS)
{
    int         dims[1];
    int         lbs[1];
    bytea      *storage;
    ArrayBuildState *astate;
    AggState   *aggstate;
    ArrayType  *result;

    /* Make sure we are in an aggregate. */
    if (!fcinfo->context || !IsA(fcinfo->context, AggState))
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("Can not call aaccum_ffunc as a non-aggregate")));

    aggstate = (AggState *) fcinfo->context;

    if (PG_ARGISNULL(0))
        PG_RETURN_NULL();

    storage = (bytea *) PG_GETARG_BYTEA_P(0);
    memcpy(&astate, storage->vl_dat, sizeof(astate));

    dims[0] = astate->nelems;
    lbs[0] = 1;

    result = construct_md_array(astate->dvalues, astate->dnulls, 1,
                                dims, lbs, astate->element_type,
                                astate->typlen, astate->typbyval,
                                astate->typalign);

    PG_RETURN_ARRAYTYPE_P(result);
}
Re: [HACKERS] 8.2 translation status?
Tom Lane wrote: I saw you did a round of message-style policing today. Are we ready to declare a string freeze, and/or call for translation work? Yes, we are ready. Everyone was very disciplined this time around. :) I need to do some maintenance on the translations repository tomorrow, but we are pretty much ready to accept new translations now. -- Peter Eisentraut http://developer.postgresql.org/~petere/
Re: [HACKERS] pg_dump exclusion switches and functions/types
On Fri, Oct 06, 2006 at 11:54:51 -0400, Tom Lane [EMAIL PROTECTED] wrote: The problem with regex is that to be upward-compatible with the old exact-match switch definitions, a switch value that doesn't contain any regex special characters is treated as an equality condition not a pattern, which makes for a discontinuity. For instance, -t x is treated like -t '^x$' while -t 'x.*y' doesn't get the anchors added. That's going to burn people. An alternative we could consider is to use LIKE patterns instead, but since underscore is a wildcard in LIKE, it's easy to imagine people getting burnt by that too. Or we could import the rather ad-hoc shell-wildcard-like rules used by psql's \d stuff. None of these are especially attractive :-( Comments? How about making the regexes anchored by default? People who want unanchored ones can add .* at the beginning and/or end. Since only whether or not the pattern matches is important (not the string it matched), this keeps all of the same power, but matches the old behavior in simple cases.
Re: [HACKERS] array_accum aggregate
Stephen Frost [EMAIL PROTECTED] writes: Long story short, I set out to build a faster array_accum. Much to my surprise and delight, we already *had* one. accumArrayResult() and makeArrayResult()/construct_md_array() appear to do a fantastic job. I've created a couple of 'glue' functions to expose these functions so they can be used in an aggregate. I'm sure they could be improved upon and possibly made even smaller than they already are (90 lines total for both) but I'd like to throw out the idea of including them in core. The aggregate created with them could also be considered for inclusion though I'm less concerned with that. Since you've set up the functions to only be usable inside an aggregate, I don't see much of a reason why we wouldn't provide the aggregate too. It looks like it should work to have just one polymorphic aggregate definition, eg, array_accum(anyelement) returns anyarray. As far as coding style goes, you're supposed to use makeArrayResult() with accumArrayResult(), not call construct_md_array() directly. And copying the ArrayBuildState around like that is just plain bizarre... Does the thing work without memory leakage (for a pass-by-ref datatype) in a GROUP BY situation? regards, tom lane
Re: [HACKERS] 8.2 translation status?
Peter, Yes, we are ready. Everyone was very disciplined this time around. :) I need to do some maintenance on the translations repository tomorrow, but we are pretty much ready to accept new translations now. I'd really like to get the Japanese translations merged with the main distribution. I've asked JPUG about this, and haven't been able to get an answer on whether this is reasonable for 8.2. Do you have any idea? -- --Josh Josh Berkus PostgreSQL @ Sun San Francisco
Re: [HACKERS] array_accum aggregate
* Tom Lane ([EMAIL PROTECTED]) wrote: Stephen Frost [EMAIL PROTECTED] writes: Long story short, I set out to build a faster array_accum. Much to my surprise and delight, we already *had* one. accumArrayResult() and makeArrayResult()/construct_md_array() appear to do a fantastic job. I've created a couple of 'glue' functions to expose these functions so they can be used in an aggregate. I'm sure they could be improved upon and possibly made even smaller than they already are (90 lines total for both) but I'd like to throw out the idea of including them in core. The aggregate created with them could also be considered for inclusion though I'm less concerned with that. Since you've set up the functions to only be usable inside an aggregate, I don't see much of a reason why we wouldn't provide the aggregate too. Sure, I don't see a reason these functions would be useful outside of an aggregate in the form they need to be in for the aggregate, either.. It looks like it should work to have just one polymorphic aggregate definition, eg, array_accum(anyelement) returns anyarray. I was hoping to do that, but since it's an aggregate the ffunc format is pre-defined to require accepting the 'internal state' and nothing else, and to return 'anyelement' or 'anyarray' one of the inputs must be an 'anyelement' or 'anyarray', aiui. That also implied to me that the type passed in was expected to be the type passed out, which wouldn't necessarily be the case here as the internal state variable is a bytea. It might be possible to do away with the internal state variable entirely though and make it an anyelement instead, if we can find somewhere else to put the pointer to the ArrayBuildState. As far as coding style goes, you're supposed to use makeArrayResult() with accumArrayResult(), not call construct_md_array() directly. And copying the ArrayBuildState around like that is just plain bizarre...
I had attempted to use makeArrayResult() originally and ran into some trouble with the MemoryContextDelete() which is done during it. The context used was the AggState context and therefore was deleted elsewhere. That might have been a misunderstanding on my part though since I recall fixing at least one or two bugs afterwards, so it may be possible to go back and change to using makeArrayResult(). That'd certainly make the ffunc smaller. As for ArrayBuildState, I'm not actually copying the structure around, just the pointer is being copied around inside of the state transition variable (which is a bytea). I'm not sure where else to store the pointer to the ArrayBuildState though, since multiple could be in play at a given time during an aggregation, aiui. Does the thing work without memory leakage (for a pass-by-ref datatype) in a GROUP BY situation? I expected that using the AggState context would handle freeing the memory at the end of the aggregation as I believe that's the context used for the state variable itself as well. Honestly, I wasn't entirely sure how to create my own context or if that was the correct thing to do in this case. I'll see about changing it around to do that if it's acceptable to have a context created for each instance of a group-by aggregate, and I can figure out how. :) I'm not sure about memory leakage during a Sort+GroupAgg. I don't know how that's done currently either, honestly. It seems like memory could be freed during that once the ffunc is called, and an extra memory context could do that, is that what you're referring to? Thanks! Stephen
Re: [HACKERS] array_accum aggregate
* Stephen Frost ([EMAIL PROTECTED]) wrote: For comparison, the new functions run with: time psql -c "select aaccum(generate_series) from generate_series(1,100);" > /dev/null 4.24s real 0.34s user 0.06s system Compared to: time psql -c "select array_accum(generate_series) from generate_series(1,100);" > /dev/null ... Well, it's still running and it's been over an hour. Just to follow-up on this, the result was: 7601.36s real 0.36s user 0.02s system Or about 2 hours. Enjoy, Stephen
Re: [HACKERS] pg_upgrade: downgradebility
Jonah, It's been two weeks, and I haven't seen anything, either here or on pg_foundry. Is this project derailed? -- --Josh Josh Berkus PostgreSQL @ Sun San Francisco
Re: [HACKERS] pg_dump exclusion switches and functions/types
Tom Lane [EMAIL PROTECTED] writes: The existing patch's behavior is that the rightmost switch wins, ie, if an object's name matches more than one pattern then it is included or excluded according to the rightmost switch it matches. This is, erm, poorly documented, but it seems like useful behavior so I don't have an objection myself. I don't know, it sounds like it's the source of the confusion you identify later. My first thought is that the rule should be to apply all the inclusion switches (implicitly including everything if there are none), then apply all the exclusion switches. That leads to including non-schema objects only if there are no schema inclusion switches. Which seems pretty logical since if you're explicitly including objects then you'll only expect objects explicitly included to be dumped and you'll quickly realize there's no switch to bring in those non-schema objects. Maybe there should be a switch to include them just for completeness. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com
[HACKERS] Added links to the release notes
I have added links from the 8.2 release notes into our documentation. If people have additions/changes, please let me know. -- Bruce Momjian [EMAIL PROTECTED] EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] array_accum aggregate
Stephen Frost [EMAIL PROTECTED] writes: * Tom Lane ([EMAIL PROTECTED]) wrote: It looks like it should work to have just one polymorphic aggregate definition, eg, array_accum(anyelement) returns anyarray. I was hoping to do that, but since it's an aggregate the ffunc format is pre-defined to require accepting the 'internal state' and nothing else, and to return 'anyelement' or 'anyarray' one of the inputs must be an 'anyelement' or 'anyarray', aiui. Hmm ... I hadn't been thinking about what the state type would need to be, but certainly bytea is a lie given what you're really doing. We've run into this same problem in contrib/intagg: sometimes you'd like to use a state data structure that isn't any regular SQL datatype, and in particular isn't just a single blob of memory. That's a problem from nodeAgg's point of view because it expects to be responsible for copying the state value from one context to another. Don't have an immediate idea for a solution ... regards, tom lane