Re: [PATCHES] [HACKERS] WITH RECURSIVE updated to CVS TIP
hello david,

i did some quick testing with this wonderful patch. it seems there are some flaws in there still:

test=# explain select count(*)
test-# from ( WITH RECURSIVE t(n) AS ( SELECT 1 UNION ALL SELECT DISTINCT n+1 FROM t )
test(# SELECT * FROM t WHERE n < 50) as t
test-# WHERE n < 100;
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!> \q

this one will kill the planner :( removing the (totally stupid) DISTINCT avoids the core dump.

i found one more issue:

-- broken: wrong result
test=# select count(*)
from ( WITH RECURSIVE t(n) AS ( SELECT 1 UNION ALL SELECT n + 1 FROM t)
SELECT * FROM t WHERE n < 50) as t
WHERE n < ( select count(*)
from ( WITH RECURSIVE t(n) AS ( SELECT 1 UNION ALL SELECT n + 1 FROM t )
SELECT * FROM t WHERE n < 50) as t
WHERE n < 100) ;
 count
-------
     1
(1 row)

if i am not totally wrong, this should give us a different result.

i am looking forward to seeing this patch in core :). it is simply wonderful ...

many thanks,

hans

On Jul 3, 2008, at 1:11 AM, David Fetter wrote:

Folks, Please find patch enclosed, including some documentation. Can we see about getting this in this commitfest? Cheers, David.
--
David Fetter [EMAIL PROTECTED] http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: [EMAIL PROTECTED]
Remember to vote! Consider donating to Postgres: http://www.postgresql.org/about/donate

recursive_query-7.patch.bz2

--
Sent via pgsql-hackers mailing list ([EMAIL PROTECTED])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

--
Cybertec Schönig & Schönig GmbH
PostgreSQL Solutions and Support
Gröhrmühlgasse 26, 2700 Wiener Neustadt
Tel: +43/1/205 10 35 / 340
www.postgresql-support.de, www.postgresql-support.com
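For the second (wrong-result) query, the expected answer can be re-derived in plain Python, assuming every HTML-eaten comparison operator in the archived mail was `<`: the inner subquery counts 49 rows, so the outer count should be 48, not the 1 the server returned.

```python
# Simulating the second query's logic (assumption: all eaten operators were "<").
t = list(range(1, 50))                  # WITH RECURSIVE t(n): 1, 2, ... cut at n < 50

inner = sum(1 for n in t if n < 100)    # the subquery's count(*)  -> 49
outer = sum(1 for n in t if n < inner)  # outer count: rows with n < 49 -> 48
print(inner, outer)                     # 49 48
```

This matches hans's remark that the reported count of 1 must be wrong.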
Re: [HACKERS] [PATCHES] WITH RECURSIVE patch V0.1
Gregory Stark wrote:

Joshua D. Drake [EMAIL PROTECTED] writes:

Couldn't we just have it pay attention to the existing max_stack_depth?

A recursive query does not consume stack. The server enters an infinite loop without consuming stack, so the stack-depth error does not happen. We could have a separate GUC variable which limits the maximum number of levels of recursive iterations. That might be a useful feature for DBAs that want to prevent their users from issuing an infinite query.

statement_timeout :)

Good point. Though it occurs to me that if you set FETCH_COUNT in psql (or do the equivalent in your code) statement_timeout becomes much less useful.

i don't think statement_timeout is a good idea at all. it is not deterministic: depending on the load on the server some queries will execute while others fail. a separate GUC is needed.

best regards,

hans

--
Cybertec Schönig & Schönig GmbH
PostgreSQL Solutions and Support
Gröhrmühlgasse 26, A-2700 Wiener Neustadt
Tel: +43/1/205 10 35 / 340
www.postgresql-support.de, www.postgresql-support.com

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches
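The determinism point can be illustrated with a toy evaluator: a recursion-level cap fails the same query on every machine, regardless of load, while a wall-clock timeout does not. This is only a sketch of the idea; `max_recursive_iterations` is a hypothetical limit, not a real PostgreSQL GUC.

```python
# Toy recursive-query evaluator with a deterministic iteration cap.
def run_recursive(produce_next, max_recursive_iterations):
    level, rows, out = 0, [1], []
    while rows:
        level += 1
        if level > max_recursive_iterations:
            raise RuntimeError("recursion limit exceeded")   # deterministic failure
        out.extend(rows)
        # next recursive working set; None means the row terminates
        rows = [produce_next(r) for r in rows if produce_next(r) is not None]
    return out

# a terminating "query" finishes well under the cap
print(len(run_recursive(lambda n: n + 1 if n < 10 else None, 100)))  # 10
# an infinite "query" (lambda n: n + 1) would hit the cap at level 101
```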
Re: [PATCHES] 64-bit CommandIds
Alvaro Herrera wrote:

Bruce Momjian wrote: I think the case for it got a whole lot weaker in 8.3, with lazy consumption of CIDs.

Agreed. Let's see if we get requests for it in >= 8.3 releases. In the original submission message you find this text:

: attached is our patch against HEAD which enables extending CommandIds
: to 64-bit. This is for enabling long transactions that really do that
: much non-read-only work in one transaction.

Question for Hans-Juergen and Zoltan: have you tested 8.3 and do you still see the need for this?

good morning,

i have seen this problem two or three times within the past 2-3 years or so. so it can basically happen in the field for some special-purpose applications, but i don't see it as an everyday problem. it would be nice to have it in. we could also go for some special contrib module which could contain the patch along with some documentation, but i am not quite sure how this fits in there. we would of course maintain the patch.

many thanks,

hans

--
Cybertec Schönig & Schönig GmbH
PostgreSQL Solutions and Support
Gröhrmühlgasse 26, A-2700 Wiener Neustadt
Tel: +43/1/205 10 35 / 340
www.postgresql-support.de, www.postgresql-support.com
Re: [PATCHES] 64-bit CommandIds
Tom Lane wrote:

Hans-Juergen Schoenig [EMAIL PROTECTED] writes:

Alvaro Herrera wrote: Question for Hans-Juergen and Zoltan: have you tested 8.3 and do you still see the need for this?

i have seen this problem two or three times within the past 2-3 years or so. so it can basically happen in the field for some special-purpose applications, but i don't see it as an everyday problem. it would be nice to have it in.

So these experiences were pre-8.3, right? The reason that I'm harping on that is that plpgsql does a CommandCounterIncrement for each expression it evaluates, whether or not there's any visible database access. As of 8.3 that won't cause consumption of CIDs, but before it did. I suspect that in a lot of real-world scenarios, CID consumption from triggers will be down by an order of magnitude.

regards, tom lane

we found those problems when we dealt with stored procedures (during huge analysis and data modification transactions - rollup and materialize stuff, basically). i would think as well that it should have improved a lot. i have not tested with 8.3, however. on 8.2 it took something like 18 hours to reach that threshold - just to give you an impression.

many thanks,

hans

--
Cybertec Schönig & Schönig GmbH
PostgreSQL Solutions and Support
Gröhrmühlgasse 26, A-2700 Wiener Neustadt
Tel: +43/1/205 10 35 / 340
www.postgresql-support.de, www.postgresql-support.com
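To put the "18 hours" figure in perspective, a back-of-the-envelope calculation (mine, not from the thread) shows the sustained rate of command-counter increments needed to exhaust a 32-bit CommandId space in that time:

```python
# Exhausting 2^32 CommandIds in ~18 hours implies roughly 66,000
# command-counter increments per second, sustained.
cid_space = 2 ** 32
seconds = 18 * 3600
rate = cid_space / seconds
print(int(rate))    # ~66280 CIDs per second
```

That rate is easily reachable by a pre-8.3 plpgsql workload, where every evaluated expression consumed a CID, which supports Tom's point that 8.3's lazy CID consumption should change the picture substantially.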
Re: [PATCHES] stored procedure stats in collector
On Mar 23, 2008, at 9:25 PM, Volkan YAZICI wrote:

Hi,

On Sun, 23 Mar 2008, Martin Pihlak [EMAIL PROTECTED] writes: Attached is a patch that enables tracking function calls through the stats subsystem. Original discussion: http://archives.postgresql.org/pgsql-hackers/2007-07/msg00377.php Introduces new guc variable - track_functions. Possible values are:

none - no collection, default
pl - tracks procedural language functions
all - tracks procedural, SQL and C (not internal) functions

I might have missed the discussion, but I'd love to see a more flexible interface for configuration parameters. For instance, it'd be great if we could specify which procedural languages to track in the `pl' GUC. Moreover, if it'd be possible to specify which specific functions we want to track, then that would be awesome as well. For instance, possible configuration combinations for track_functions can be:

`pl:*' - Tracks procedural, SQL and C (not internal) functions in the `public' schema.
`pl:pgsql' - Tracks only PL/pgSQL functions.
`pl:scheme' - Tracks only PL/scheme functions.
`foo(int, int)' - Tracks the related `foo' function in the public schema.
`stock.foo(int, int)' - Tracks the related `foo' function in the `stock' schema.
`pl:stock.*' - Tracks procedural, SQL and C (not internal) functions in the `stock' schema.

The syntax can obviously be made much more consistent. (These are just what I came up with for the very moment.) And I'm aware of the overhead and complexity(?) this sort of scheme will bring, but I really want to use such a useful feature with the mentioned flexibilities.

this patch is quite cool already. it would be even cooler if we could define tracking on a per-function basis, e.g.

CREATE FUNCTION ... TRACK | NOTRACK

in addition to that we could have a GUC defining whether TRACK or NOTRACK is used as the default. in many cases you are only interested in a special set of functions anyway. as every operator is basically a procedure in postgres, i am not quite happy about the per-language approach.
best regards, hans -- Cybertec Schönig Schönig GmbH PostgreSQL Solutions and Support Gröhrmühlgasse 26, 2700 Wiener Neustadt Tel: +43/1/205 10 35 / 340 www.postgresql-support.de, www.postgresql-support.com
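A sketch of how the proposed track_functions patterns could be matched. The `pl:` syntax and its semantics here are my reading of Volkan's proposal, not anything PostgreSQL implements:

```python
# Hypothetical matcher for tracking specs like "pl:pgsql",
# "pl:stock.*", or "stock.foo(int, int)".
from fnmatch import fnmatch

def tracked(spec, language, qualified_signature):
    if spec.startswith("pl:"):
        pattern = spec[3:]
        # "pl:pgsql" selects by language; "pl:stock.*" by qualified name
        return pattern == language or fnmatch(qualified_signature, pattern)
    # bare specs name one function exactly
    return spec == qualified_signature

print(tracked("pl:pgsql", "pgsql", "public.foo(int)"))               # True
print(tracked("pl:stock.*", "sql", "stock.foo(int, int)"))           # True
print(tracked("stock.foo(int, int)", "sql", "stock.foo(int, int)"))  # True
print(tracked("foo(int, int)", "sql", "public.bar(int)"))            # False
```

Checking specs like this per call is cheap, but as the thread notes, doing it for every operator invocation is where the overhead concern comes from.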
Re: [PATCHES] Endless recovery
On Feb 11, 2008, at 10:26 AM, Heikki Linnakangas wrote:

Hans-Juergen Schoenig wrote: Last week we have seen a problem with some horribly configured machine. The disk filled up (bad FSM ;) ) and once this happened the sysadmin killed the system (-9). After two days PostgreSQL had still not started up and they tried to restart it again and again, making sure that the consistency check was started over and over again (thus causing more and more downtime). From the admin's point of view there was no way to find out whether the machine was actually dead or still recovering. Here is a small patch which issues a log message indicating that the recovery process can take ages. Maybe this can prevent some admins from interrupting the recovery process.

Wait, are you saying that the time was spent in the rm_cleanup phase? That sounds unbelievable. Surely the time was spent in the redo phase, no?

it was a seek-heavy workload, with backtraces like this one:

[EMAIL PROTECTED]:/srv/daten$ gdb --batch -x gdb.cmd /usr/local/pgsql-8.1.11/bin/postmaster 15727
Using host libthread_db library /lib/tls/i686/cmov/libthread_db.so.1.
[Thread debugging using libthread_db enabled]
[New Thread -1214167376 (LWP 15727)]
0xe410 in __kernel_vsyscall ()
#0 0xe410 in __kernel_vsyscall ()
#1 0xb7c55c53 in read () from /lib/tls/i686/cmov/libc.so.6
#2 0x081b11e2 in FileRead (file=54, buffer=0xa7bb7540 "1", amount=8192) at fd.c:1015
#3 0x081be33f in mdread (reln=0x8371e90, blocknum=6553044, buffer=0xa7bb7540 "1") at md.c:497
#4 0x081be88a in smgrread (reln=0x8371e90, blocknum=6553044, buffer=0xa7bb7540 "1") at smgr.c:527
#5 0x081ae4af in ReadBuffer (reln=0x834f0d0, blockNum=6553044) at bufmgr.c:252
#6 0x080b9869 in XLogReadBuffer (extend=0 '\0', reln=0x834f0d0, blkno=6553044) at xlogutils.c:57
#7 0x0808b3a2 in gistXLogReadAndLockBuffer (r=0x834f0d0, blkno=6553044) at gistxlog.c:530
#8 0x08087641 in gistFindPath (r=0x834f0d0, child=8558565, myReadBuffer=0x808b380 <gistXLogReadAndLockBuffer>) at gist.c:677
#9 0x0808c7bb in gist_xlog_cleanup () at gistxlog.c:549
#10 0x080b7c5c in StartupXLOG () at xlog.c:4769
#11 0x080c2635 in BootstrapMain (argc=4, argv=<value optimized out>) at bootstrap.c:428
#12 0x0819a2cc in StartChildProcess (xlop=2) at postmaster.c:3581
#13 0x0819cb38 in PostmasterMain (argc=1, argv=0x83275a8) at postmaster.c:931
#14 0x0815f909 in main (argc=1, argv=0x6a27e4b4) at main.c:265

redo was done fast ...

[2008-02-07 22:24:50 CET ]LOG: database system was interrupted while in recovery at 2008-02-04 11:09:04 CET
[2008-02-07 22:24:50 CET ]HINT: This probably means that some data is corrupted and you will have to use the last backup for recovery.
[2008-02-07 22:24:50 CET ]LOG: checkpoint record is at 35/3BAA131C
[2008-02-07 22:24:50 CET ]LOG: redo record is at 35/3BA11AB4; undo record is at 0/0; shutdown FALSE
[2008-02-07 22:24:50 CET ]LOG: next transaction ID: 194549334; next OID: 16586886
[2008-02-07 22:24:50 CET ]LOG: next MultiXactId: 1; next MultiXactOffset: 0
[2008-02-07 22:24:50 CET ]LOG: database system was not properly shut down; automatic recovery in progress
[2008-02-07 22:24:50 CET ]LOG: redo starts at 35/3BA11AB4
[2008-02-07 22:24:53 CET ]LOG: record with zero length at 35/3C8317C8
[2008-02-07 22:24:53 CET ]LOG: redo done at 35/3C8317A0

note that redo was finished fast ...

In our case, the recovery process took 3.5 days !!

That's a ridiculously long time. Was this a normal recovery, not a PITR archive recovery? Any idea why the recovery took so long? Given the max. checkpoint timeout of 1h, I would expect that the recovery would take a maximum of a few hours even with an extremely write-heavy workload.

there was no PITR in place - just a normal DB ... my first idea when they called me was that it must be related to checkpoint_segments - maybe some 2 million segments and some insanely long timeout. but no, this was not the case. segments were at 12 and the timeout was just several minutes. basically from outside everything was looking fine ... we used a binary copy of the data on two boxes (one for debugging). the entire process worked like a charm - it just took ages. we have seen a lot of random I/O here. this was quite a small machine with insanely small memory settings. no errors were issued during the process - all fine; just long ... The DB version is 8.1.11. The entire DB is 116 GB. It is more or less one table along with a 65 GB GiST index.

hans

--
Cybertec Schönig & Schönig GmbH
PostgreSQL Solutions and Support
Gröhrmühlgasse 26, 2700 Wiener Neustadt
Tel: +43/1/205 10 35 / 340
www.postgresql.at, www.cybertec.at
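The gistFindPath backtrace plus the random-I/O observation makes multi-day cleanup at least arithmetically plausible. A rough estimate (my own numbers, assuming classic ~10 ms seeks on a small machine with a cold cache):

```python
# Back-of-the-envelope: random 8 KB page reads over a 65 GB GiST index.
index_bytes = 65 * 1024**3
page_size = 8192
pages = index_bytes // page_size        # ~8.5 million pages
seek = 0.010                            # ~10 ms per random read, assumed
days_per_pass = pages * seek / 86400
print(pages, round(days_per_pass, 2))   # 8519680 0.99
```

One fully random pass over the index already costs about a day; gist_xlog_cleanup revisiting pages while walking paths from the root could easily multiply that into the 3.5 days observed.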
[PATCHES] Endless recovery
Last week we have seen a problem with some horribly configured machine. The disk filled up (bad FSM ;) ) and once this happened the sysadmin killed the system (-9). After two days PostgreSQL had still not started up and they tried to restart it again and again, making sure that the consistency check was started over and over again (thus causing more and more downtime). From the admin's point of view there was no way to find out whether the machine was actually dead or still recovering. Here is a small patch which issues a log message indicating that the recovery process can take ages. Maybe this can prevent some admins from interrupting the recovery process. In our case, the recovery process took 3.5 days !!

best regards,

hans

xlog-recovery-warning.patch
Description: Binary data

--
Cybertec Schönig & Schönig GmbH
PostgreSQL Solutions and Support
Gröhrmühlgasse 26, 2700 Wiener Neustadt
Tel: +43/1/205 10 35 / 340
www.postgresql.at, www.cybertec.at
[PATCHES] CONNECT BY for 8.3 ...
good morning everybody,

this is a working version of evgen potemkin's patch implementing CONNECT BY. it has been ported to 8.3 and it seems to work flawlessly with CVS head. it is not supposed to be included in 8.4, but i guess it might be useful for some people. there are still some limitations and design issues along with Oracle-specific syntax. however, it can be very useful for special-purpose applications, as it is way faster than connectby().

many thanks and best regards,

hans

hier-Pg8.3.tgz
Description: Binary data

--
Cybertec Schönig & Schönig GmbH
PostgreSQL Solutions and Support
Gröhrmühlgasse 26, 2700 Wiener Neustadt
Tel: +43/1/205 10 35 / 340
www.postgresql.at, www.cybertec.at
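For readers unfamiliar with it, what a CONNECT BY (or contrib connectby()) query computes is a depth-first walk of a parent/child table, emitting each row with its level. A minimal illustration in Python, with a made-up edge table:

```python
# Depth-first hierarchy traversal, the result shape CONNECT BY produces.
# The edge table here is invented for illustration.
edges = {None: ["a"], "a": ["b", "c"], "b": ["d"]}

def connect_by(parent, level=1):
    for child in edges.get(parent, []):
        yield child, level                      # row plus its LEVEL
        yield from connect_by(child, level + 1) # descend before siblings

print(list(connect_by(None)))
# [('a', 1), ('b', 2), ('d', 3), ('c', 2)]
```

Doing this recursion inside the executor, rather than through connectby()'s repeated SPI queries, is where the speed difference hans mentions comes from.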
Re: [PATCHES] [HACKERS] Performance testing of COPY (SELECT) TO
Remember that we were talking about supporting views, not tables. And if a view uses a slow query then you are in immediate danger of having a slow COPY. This may not be a problem but it needs to be discussed and an agreement must be reached before introducing such a change (and not during feature freeze).

this will definitely be the case - however, this is not what it was made for. it has not been made to be fast but to fulfill some other task. the reason why this has been implemented is: consider a large-scale database containing hundreds of gigs of data. in our special case we have to export in a flexible way. the data which has to be exported comes from multiple tables (between 3 and 7, depending on the data we are looking at in this project). the export has to be performed in a flexible way and it needs certain parameters. defining temp tables and storing the data in there is simply not nice at all. in most cases exports want to transform data on the fly - speed is not as important as flexibility here.

so in my view the speed argument does not matter. if somebody passes a stupid query to copy he will get stupid runtimes - just like on ordinary sql. however, we can use COPY's capabilities to format / escape data to make exports more flexible. so basically it is a win.

best regards,

hans

--
Cybertec Geschwinde & Schönig GmbH
Schöngrabern 134; A-2020 Hollabrunn
Tel: +43/1/205 10 35 / 340
www.postgresql.at, www.cybertec.at

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
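The formatting/escaping capability the flexibility argument relies on can be sketched by hand. This is a minimal imitation of COPY's tab-delimited text format (null marker and escapes), not PostgreSQL's actual implementation:

```python
# Hand-rolled sketch of COPY text-format output for one row.
def copy_text_field(value):
    if value is None:
        return r"\N"                    # COPY's default null marker
    s = str(value)
    # escape backslash first, then the delimiter and newline
    for ch, esc in (("\\", r"\\"), ("\t", r"\t"), ("\n", r"\n")):
        s = s.replace(ch, esc)
    return s

def copy_text_row(row):
    return "\t".join(copy_text_field(v) for v in row)

print(copy_text_row([1, "a\tb", None]))
```

With COPY (SELECT ...) TO, this escaping comes for free on arbitrarily transformed data, which is exactly the export use case described above.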
Re: [PATCHES] [HACKERS] Performance testing of COPY (SELECT) TO
Alvaro Herrera wrote:

Hans-Juergen Schoenig wrote: Remember that we were talking about supporting views, not tables. And if a view uses a slow query then you are in immediate danger of having a slow COPY. This may not be a problem but it needs to be discussed and an agreement must be reached before introducing such a change (and not during feature freeze). this will definitely be the case - however, this is not what it was made for. it has not been made to be fast but to fulfill some other task. the reason why this has been implemented is: consider a large-scale database containing hundreds of gigs of data. in our special case we have to export in a flexible way. the data which has to be exported comes from multiple tables (between 3 and 7, depending on the data we are looking at in this project). the export has to be performed in a flexible way and it needs certain parameters. defining temp tables and storing the data in there is simply not nice at all. in most cases exports want to transform data on the fly - speed is not as important as flexibility here.

My question is, if we allow this: copy (select * from view) to stdout; (or to a file, whatever), is it enough for you? Or would you insist on also having copy view to stdout; ?

i would say that copy view to stdout is just some syntactic sugar (to me at least). the important thing is that we add the flexibility of SELECT to it. a view is nothing else than a rule on SELECT anyway. to be honest i never thought about views when creating this copy idea. however, i think it is not bad to have it, because i have seen a couple of times already that tables turn into views when new features are added to an existing data structure. if we support copy on views, exports can stay as they are even if the data structure is changed in that way.
however, if people think that views are not needed that way, it is still a good solution, as views are not the basic reason why this new functionality is a good thing to have.

many thanks,

hans

--
Cybertec Geschwinde & Schönig GmbH
Schöngrabern 134; A-2020 Hollabrunn
Tel: +43/1/205 10 35 / 340
www.postgresql.at, www.cybertec.at
Re: [PATCHES] [HACKERS] Performance testing of COPY (SELECT) TO
Remember that we were talking about supporting views, not tables. And if a view uses a slow query then you are in immediate danger of having a slow COPY. This may not be a problem but it needs to be discussed and an agreement must be reached before introducing such a change (and not during feature freeze).

COPY relname TO meant tables _and_ views to me. My previous testing showed no difference between COPY table TO and COPY (SELECT * FROM table) TO. Similarly a slow query defined in the view should show no difference between COPY view TO and COPY (SELECT * FROM view) TO.

The difference is that we are giving a very clear distinction between a table and a view. If we don't support the view in the direct COPY, but instead insist that it be passed via a SELECT query, then the user will be aware that it may be slow.

what kind of clever customers do you have in the US? ;) i would never say something like that here :). i see your point and i think it is not a too bad idea. at least some folks might see that there is no voodoo going on ...

relname at this point may mean anything -- are you supporting sequences and toast tables as well?

good point ...

It's ugly because you are forcing the system to parse something that was already parsed.

definitely an argument for dropping the view stuff ...

On the other hand I don't see why you are arguing in favor of a useless feature whose coding is dubious; you can have _the same thing_ with nice code and no discussion.

what are you referring to?

hans

--
Cybertec Geschwinde & Schönig GmbH
Schöngrabern 134; A-2020 Hollabrunn
Tel: +43/1/205 10 35 / 340
www.postgresql.at, www.cybertec.at
Re: [PATCHES] [HACKERS] Performance testing of COPY (SELECT) TO
On the other hand I don't see why you are arguing in favor of a useless feature whose coding is dubious; you can have _the same thing_ with nice code and no discussion.

Because of [1] and because Mr. Schoenig's arguments about changing schemas.

first of all; hans is enough - skip the mr ;) i think changing schemas is a good argument, but we could sacrifice that for the sake of clarity and clean code. i am not against keeping it, but i can understand the argument against views. i always preferred select.

mr hans ;)

--
Cybertec Geschwinde & Schönig GmbH
Schöngrabern 134; A-2020 Hollabrunn
Tel: +43/1/205 10 35 / 340
www.postgresql.at, www.cybertec.at
Re: [PATCHES] Implementing SELECT FOR UPDATE [NOWAIT]
yes, i think we can do it in time. regards, hans Bruce Momjian wrote: Are you working on a updated version of this? --- Bruce Momjian wrote: Uh, seems the code has drifted too much and now I can't apply this. Would you redo this against current CVS? Thanks. --- Hans-Juergen Schoenig wrote: Folks, We have implemented SELECT FOR UPDATE NOWAIT for PostgreSQL. The patch attached to this email contains all the required code including ECPG updates and some documentation. It would be nice if this patch would be included in PostgreSQL 8.1 Best regards, Ewald Geschwinde Hans-Juergen Schoenig -- Cybertec Geschwinde u Schoenig GmbH. Schoengrabern 134, A-2020 Hollabrunn, Austria Tel: +43/660/816 40 77 www.cybertec.at, www.postgresql.at [ text/x-patch is unsupported, treating like TEXT/PLAIN ] diff -r -c postgresql-8.0.0rc2.orig/doc/src/sgml/ref/select.sgml postgresql-8.0.0rc2/doc/src/sgml/ref/select.sgml *** postgresql-8.0.0rc2.orig/doc/src/sgml/ref/select.sgml Sat Nov 27 22:27:07 2004 --- postgresql-8.0.0rc2/doc/src/sgml/ref/select.sgmlMon Dec 27 10:57:05 2004 *** *** 30,36 [ ORDER BY replaceable class=parameterexpression/replaceable [ ASC | DESC | USING replaceable class=parameteroperator/replaceable ] [, ...] ] [ LIMIT { replaceable class=parametercount/replaceable | ALL } ] [ OFFSET replaceable class=parameterstart/replaceable ] ! [ FOR UPDATE [ OF replaceable class=parametertable_name/replaceable [, ...] ] ] where replaceable class=parameterfrom_item/replaceable can be one of: --- 30,36 [ ORDER BY replaceable class=parameterexpression/replaceable [ ASC | DESC | USING replaceable class=parameteroperator/replaceable ] [, ...] ] [ LIMIT { replaceable class=parametercount/replaceable | ALL } ] [ OFFSET replaceable class=parameterstart/replaceable ] ! [ FOR UPDATE [ OF replaceable class=parametertable_name/replaceable [, ...] 
] [NOWAIT] ] where replaceable class=parameterfrom_item/replaceable can be one of: *** *** 772,778 para The literalFOR UPDATE/literal clause has this form: synopsis ! FOR UPDATE [ OF replaceable class=parametertable_name/replaceable [, ...] ] /synopsis /para --- 772,778 para The literalFOR UPDATE/literal clause has this form: synopsis ! FOR UPDATE [ OF replaceable class=parametertable_name/replaceable [, ...] ] [NOWAIT] /synopsis /para *** *** 789,796 has already locked a selected row or rows, commandSELECT FOR UPDATE/command will wait for the other transaction to complete, and will then lock and return the updated row (or no row, if the ! row was deleted). For further discussion see xref ! linkend=mvcc. /para para --- 789,802 has already locked a selected row or rows, commandSELECT FOR UPDATE/command will wait for the other transaction to complete, and will then lock and return the updated row (or no row, if the ! row was deleted). If the current transaction is not supposed to ! wait on other transactions to commit the NOWAIT option can be ! used. If commandSELECT FOR UPDATE NOWAIT/command finds out ! that somebody else is holding a lock an error will be thrown. ! This will only happen in case of row level locks - if somebody ! holds a table lock commandSELECT FOR UPDATE NOWAIT/command ! will still wait for concurrent transactions. For further ! discussion see xref linkend=mvcc. /para para diff -r -c postgresql-8.0.0rc2.orig/doc/src/sgml/sql.sgml postgresql-8.0.0rc2/doc/src/sgml/sql.sgml *** postgresql-8.0.0rc2.orig/doc/src/sgml/sql.sgml Mon Nov 15 07:32:14 2004 --- postgresql-8.0.0rc2/doc/src/sgml/sql.sgml Mon Dec 27 10:57:05 2004 *** *** 866,872 [ ORDER BY replaceable class=PARAMETERexpression/replaceable [ ASC | DESC | USING replaceable class=PARAMETERoperator/replaceable ] [, ...] ] [ LIMIT { replaceable class=PARAMETERcount/replaceable | ALL } ] [ OFFSET replaceable class=PARAMETERstart/replaceable ] ! 
[ FOR UPDATE [ OF replaceable class=PARAMETERclass_name/replaceable [, ...] ] ] /synopsis /para --- 866,872 [ ORDER BY replaceable class=PARAMETERexpression/replaceable [ ASC | DESC | USING replaceable class=PARAMETERoperator/replaceable ] [, ...] ] [ LIMIT { replaceable class=PARAMETERcount/replaceable | ALL } ] [ OFFSET replaceable class=PARAMETERstart/replaceable ] ! [ FOR UPDATE [ OF replaceable class=PARAMETERclass_name/replaceable [, ...] ] [NOWAIT] ] /synopsis /para diff -r -c postgresql-8.0.0rc2.orig/src/backend/access/heap/heapam.c postgresql-8.0.0rc2/src/backend/access/heap/heapam.c *** postgresql
[PATCHES] Implementing SELECT FOR UPDATE [NOWAIT]
Folks, We have implemented SELECT FOR UPDATE NOWAIT for PostgreSQL. The patch attached to this email contains all the required code including ECPG updates and some documentation. It would be nice if this patch would be included in PostgreSQL 8.1 Best regards, Ewald Geschwinde Hans-Juergen Schoenig -- Cybertec Geschwinde u Schoenig GmbH. Schoengrabern 134, A-2020 Hollabrunn, Austria Tel: +43/660/816 40 77 www.cybertec.at, www.postgresql.at diff -r -c postgresql-8.0.0rc2.orig/doc/src/sgml/ref/select.sgml postgresql-8.0.0rc2/doc/src/sgml/ref/select.sgml *** postgresql-8.0.0rc2.orig/doc/src/sgml/ref/select.sgml Sat Nov 27 22:27:07 2004 --- postgresql-8.0.0rc2/doc/src/sgml/ref/select.sgml Mon Dec 27 10:57:05 2004 *** *** 30,36 [ ORDER BY replaceable class=parameterexpression/replaceable [ ASC | DESC | USING replaceable class=parameteroperator/replaceable ] [, ...] ] [ LIMIT { replaceable class=parametercount/replaceable | ALL } ] [ OFFSET replaceable class=parameterstart/replaceable ] ! [ FOR UPDATE [ OF replaceable class=parametertable_name/replaceable [, ...] ] ] where replaceable class=parameterfrom_item/replaceable can be one of: --- 30,36 [ ORDER BY replaceable class=parameterexpression/replaceable [ ASC | DESC | USING replaceable class=parameteroperator/replaceable ] [, ...] ] [ LIMIT { replaceable class=parametercount/replaceable | ALL } ] [ OFFSET replaceable class=parameterstart/replaceable ] ! [ FOR UPDATE [ OF replaceable class=parametertable_name/replaceable [, ...] ] [NOWAIT] ] where replaceable class=parameterfrom_item/replaceable can be one of: *** *** 772,778 para The literalFOR UPDATE/literal clause has this form: synopsis ! FOR UPDATE [ OF replaceable class=parametertable_name/replaceable [, ...] ] /synopsis /para --- 772,778 para The literalFOR UPDATE/literal clause has this form: synopsis ! FOR UPDATE [ OF replaceable class=parametertable_name/replaceable [, ...] 
] [NOWAIT] /synopsis /para *** *** 789,796 has already locked a selected row or rows, commandSELECT FOR UPDATE/command will wait for the other transaction to complete, and will then lock and return the updated row (or no row, if the ! row was deleted). For further discussion see xref ! linkend=mvcc. /para para --- 789,802 has already locked a selected row or rows, commandSELECT FOR UPDATE/command will wait for the other transaction to complete, and will then lock and return the updated row (or no row, if the ! row was deleted). If the current transaction is not supposed to ! wait on other transactions to commit the NOWAIT option can be ! used. If commandSELECT FOR UPDATE NOWAIT/command finds out ! that somebody else is holding a lock an error will be thrown. ! This will only happen in case of row level locks - if somebody ! holds a table lock commandSELECT FOR UPDATE NOWAIT/command ! will still wait for concurrent transactions. For further ! discussion see xref linkend=mvcc. /para para diff -r -c postgresql-8.0.0rc2.orig/doc/src/sgml/sql.sgml postgresql-8.0.0rc2/doc/src/sgml/sql.sgml *** postgresql-8.0.0rc2.orig/doc/src/sgml/sql.sgml Mon Nov 15 07:32:14 2004 --- postgresql-8.0.0rc2/doc/src/sgml/sql.sgml Mon Dec 27 10:57:05 2004 *** *** 866,872 [ ORDER BY replaceable class=PARAMETERexpression/replaceable [ ASC | DESC | USING replaceable class=PARAMETERoperator/replaceable ] [, ...] ] [ LIMIT { replaceable class=PARAMETERcount/replaceable | ALL } ] [ OFFSET replaceable class=PARAMETERstart/replaceable ] ! [ FOR UPDATE [ OF replaceable class=PARAMETERclass_name/replaceable [, ...] ] ] /synopsis /para --- 866,872 [ ORDER BY replaceable class=PARAMETERexpression/replaceable [ ASC | DESC | USING replaceable class=PARAMETERoperator/replaceable ] [, ...] ] [ LIMIT { replaceable class=PARAMETERcount/replaceable | ALL } ] [ OFFSET replaceable class=PARAMETERstart/replaceable ] ! [ FOR UPDATE [ OF replaceable class=PARAMETERclass_name/replaceable [, ...] 
] [NOWAIT] ] /synopsis /para diff -r -c postgresql-8.0.0rc2.orig/src/backend/access/heap/heapam.c postgresql-8.0.0rc2/src/backend/access/heap/heapam.c *** postgresql-8.0.0rc2.orig/src/backend/access/heap/heapam.c Sun Nov 14 03:04:12 2004 --- postgresql-8.0.0rc2/src/backend/access/heap/heapam.c Mon Dec 27 10:56:52 2004 *** *** 16,21 --- 16,22 * relation_openrv - open any relation specified by a RangeVar * relation_openr - open a system relation by name * relation_close - close any relation + * conditional_relation_open - open with option not to wait * heap_open - open a heap relation by relation OID * heap_openrv - open a heap relation specified
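The semantics the patch adds can be summed up by analogy (Python here, not SQL): FOR UPDATE NOWAIT turns "wait for the row lock" into "fail immediately if it is held", like a non-blocking lock acquire. The error message below is invented for illustration; as the patch's documentation notes, NOWAIT only applies to row-level locks, not table locks.

```python
# Non-blocking-acquire analogy for SELECT ... FOR UPDATE [NOWAIT].
import threading

row_lock = threading.Lock()

def select_for_update(nowait=False):
    # without NOWAIT we block; with NOWAIT we fail fast
    got = row_lock.acquire(blocking=not nowait)
    if not got:
        raise RuntimeError("ERROR: could not obtain lock on row")
    return "row locked"

print(select_for_update(nowait=True))   # first caller gets the lock
try:
    select_for_update(nowait=True)      # lock already held -> error, no wait
except RuntimeError as e:
    print(e)
```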