[SQL] Dynamic Query for System functions - now()
Dear Friends,

I am using PostgreSQL 7.3.4 Server on RH Linux 7.2. I am trying to generate a dynamic query to fetch the next month interval.

select now() + interval '1 month';  -- This is working fine.

I want to assign the interval number dynamically, i.e. select now() + interval 'n month';

For this I wanted to write a dynamic query:

EXECUTE 'select now()+ interval\'' || 3 || 'month\'';

ERROR:  parser: parse error at or near "'select now()+ interval\''" at character 9

Please help me with this. I want to use this query inside a PL/pgSQL function.

Regards
Kumar
Re: [SQL] Dynamic Query for System functions - now()
> Dear Friends,
>
> I am using PostgreSQL 7.3.4 Server on RH Linux 7.2. I am trying to generate
> a dynamic query to fetch the next month interval.
>
> select now() + interval '1 month';  -- This is working fine.
>
> I want to assign the interval number dynamically, i.e.
> select now() + interval 'n month';
>
> For this I wanted to write a dynamic query:
> EXECUTE 'select now()+ interval\'' || 3 || 'month\'';
> ERROR:  parser: parse error at or near "'select now()+ interval\''" at character 9
>
> Please help me with this. I want to use this query inside a PL/pgSQL function.
>
> Regards
> Kumar

You'd probably want to use the to_char() function like this:

EXECUTE 'select now()+ interval\'' || to_char(3,'9') || 'month\'';

The || operator is the string concatenation operator.

Regards, Christoph

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faqs/FAQ.html
[SQL] FOR : Structure control pb
I am using the FOR control structure as follows:

FOR i IN 0..23 LOOP
    query
END LOOP;

The query is tested and is working OK. When I launch the loop in the psql interface, I get the error "parse error at or near FOR". As this is the first time I use this kind of structure, I am probably using it the wrong way.

Thank you for your answer
Re: [SQL] Dynamic Query for System functions - now()
Dear Christoph Haller,

The code that you sent is not working:

test=> EXECUTE 'select now()+ interval\'' ||to_char(3,'9')|| 'month\'';
ERROR:  parser: parse error at or near "'select now()+ interval\''" at character 9
test=>

Also, I tried just to run the now() function with a dynamic query and got the following error. Please correct me.

test=> select now();
              now
-------------------------------
 2003-11-10 17:06:36.783779+00
(1 row)

test=> execute 'select now()';
ERROR:  parser: parse error at or near "'select now()'" at character 9
test=>

Regards
kumar

----- Original Message -----
From: "Christoph Haller" <[EMAIL PROTECTED]>
Sent: Monday, November 10, 2003 5:07 PM
Subject: Re: [SQL] Dynamic Query for System functions - now()

> You'd probably want to use the to_char() function like this:
> EXECUTE 'select now()+ interval\'' || to_char(3,'9') || 'month\'';
>
> The || operator is the string concatenation operator.
>
> Regards, Christoph
Re: [SQL] FOR : Structure control pb
On Monday 10 November 2003 11:24, Guillaume Houssay wrote:
> I am using the FOR control structure as follows
>
> FOR i IN 0..23 LOOP
>     query
> END LOOP;
>
> The query is tested and is working OK.
>
> When I launch the loop in psql interface, I get the error
> "parse error at or near FOR".

The FOR loop is a plpgsql construct, not an SQL one. You'll need to write a plpgsql function if you want to use it.

--
Richard Huxton
Archonet Ltd
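As a minimal sketch of what Richard describes — with a hypothetical function name, since the original post names none — the FOR loop has to live inside a plpgsql function body (quoted-string style, as dollar quoting is not available in 7.3):

```sql
CREATE OR REPLACE FUNCTION loop_hours() RETURNS integer AS '
DECLARE
    i integer;
    total integer := 0;
BEGIN
    FOR i IN 0..23 LOOP
        -- put the real per-hour query here; this sketch only counts iterations
        total := total + 1;
    END LOOP;
    RETURN total;
END;
' LANGUAGE 'plpgsql';

SELECT loop_hours();
```

Calling the function from psql then works, whereas a bare FOR at the prompt produces the parse error Guillaume saw.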
Re: [SQL] Dynamic Query for System functions - now()
Kumar, why don't you try:

prepare fooplan2(int4) as select now() + ($1 || ' month')::interval;
execute fooplan2(12);

Kumar wrote on Nov 10, 2003:
> The code that you sent is not working
> test=> EXECUTE 'select now()+ interval\'' ||to_char(3,'9')|| 'month\'';
> ERROR:  parser: parse error at or near "'select now()+ interval\''" at character 9
>
> Also, I tried just to run the now() function with a dynamic query and
> got the following error:
>
> test=> execute 'select now()';
> ERROR:  parser: parse error at or near "'select now()'" at character 9

--
-Achilleus
Re: [SQL] Dynamic Query for System functions - now()
I like Achilleus' proposal for its elegance; just in case you are interested in the old-school plpgsql way:

create or replace function future_date(int) returns timestamp as '
declare
    future_dat timestamp;
    future_off text;
begin
    future_dat := ''now'';
    future_off := $1::text;
    future_off := future_off || '' month'';
    return future_dat + future_off::interval;
end;' language 'plpgsql';

select future_date(9);
        future_date
----------------------------
 2004-08-10 13:57:12.700499
(1 row)

Regards, Christoph

> Kumar,
> why don't you try
>
> prepare fooplan2(int4) as select now() + ($1||' month')::interval;
> execute fooplan2(12);
>
> --
> -Achilleus
Re: [SQL] help me...
That's what I said! :) Yasir On Sun, 9 Nov 2003, Christopher Browne wrote: > Date: Sun, 09 Nov 2003 21:59:14 -0500 > From: Christopher Browne <[EMAIL PROTECTED]> > To: [EMAIL PROTECTED] > Subject: Re: [SQL] help me... > > The world rejoiced as [EMAIL PROTECTED] (ron_tabada) wrote: > > Hello everyone, Good day! Could anyone help me translate this query > > in Microsoft Access to Postgresql. I'm having a difficulty. Pls... > > > > Query1: > > SELECT items.description, Sum(supplieditems.qty) AS SumOfqty > > FROM items INNER JOIN supplieditems ON items.itemno = supplieditems.itemno > > GROUP BY items.description; > > > > Query2: > > SELECT [items].[description], Sum([customer].[qty]) AS SumOfqty > > FROM (items INNER JOIN OtherItem ON [items].[itemno]=[OtherItem].[Itemno]) INNER > > JOIN customer ON > > [OtherItem].[Itemno]=[customer].[itemcode] > > GROUP BY [items].[description]; > > > > Query3: > > SELECT [Query1].[SumOfqty], [Query2].[SumOfqty], > > [Query1]![SumOfqty]-[Query2]![SumOfqty] AS remain > > FROM Query1, Query2; > > > > I have translated Query1 and Query2 in POSTGRESQL but I don't know > > how to implement Query3. > > Apparently you have discovered the nearest equivalent to "VIEWs" in > Access. > > I can suggest two ways: > > 1. Define "query1" and "query2" as PostgreSQL views, as with... > > create view query1 as > SELECT items.description, Sum(supplieditems.qty) AS SumOfqty > FROM items INNER JOIN supplieditems ON items.itemno = supplieditems.itemno > GROUP BY items.description; > > create view query2 as [omitted details]. > > Query 3 should work perfectly well when it has the two views to work > with. > > 2. Subselects... > > SELECT Query1.SumOfqty, Query2.SumOfqty, Query1.SumOfqty-Query2.SumOfqty AS remain > FROM > (select stuff for query 1) as query1, > (select stuff for query 2) as query2; > > Approach #1. seems more appropriate, as it uses the views to keep the > queries all simple. 
> --
> let name="cbbrowne" and tld="acm.org" in String.concat "@" [name;tld];;
> http://www.ntlug.org/~cbbrowne/nonrdbms.html
> Incrementally extended heuristic algorithms tend inexorably toward the
> incomprehensible.
Re: [SQL] Dynamic Query for System functions - now()
Kumar,

What about this:

EXECUTE 'select now()+ interval \' || to_char(3,\'9\') || \'month\' ';

George

----- Original Message -----
From: "Kumar" <[EMAIL PROTECTED]>
Sent: Monday, November 10, 2003 5:57 AM
Subject: Re: [SQL] Dynamic Query for System functions - now()

> The code that you sent is not working
> test=> EXECUTE 'select now()+ interval\'' ||to_char(3,'9')|| 'month\'';
> ERROR:  parser: parse error at or near "'select now()+ interval\''" at character 9
>
> Also, I tried just to run the now() function with a dynamic query and
> got the following error:
>
> test=> execute 'select now()';
> ERROR:  parser: parse error at or near "'select now()'" at character 9
>
> Regards
> kumar
Re: [SQL] Dynamic Query for System functions - now()
On Mon, 10 Nov 2003, Kumar wrote:

> select now()+ interval'1 month'; -- This is working fine.
>
> I want to assign the interval number dynamically, i.e.
> select now()+ interval'n month';

If n is an integer, I'd suggest something like

select now() + n * interval '1 month';

rather than messing around with the text of the interval literal.
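Wrapping Stephan's suggestion in a PL/pgSQL function of the kind Kumar said he wanted might look like the sketch below (the function name is invented for illustration); note that no dynamic EXECUTE is needed at all:

```sql
CREATE OR REPLACE FUNCTION months_ahead(integer) RETURNS timestamptz AS '
BEGIN
    -- multiply the integer parameter by a one-month interval
    RETURN now() + $1 * interval ''1 month'';
END;
' LANGUAGE 'plpgsql';

SELECT months_ahead(3);
```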
Re: [SQL] transaction processing after error in statement
On Mon, 10 Nov 2003, Holger Jakobs wrote:

> On 9 Nov, Stephan Szabo wrote:
> > In PostgreSQL all errors are currently considered unrecoverable, and
> > all statements in a transaction must commit or rollback together as a
> > single unit. In the future an implementation of nested transactions or
> > savepoints would presumably relax this limitation to only the
> > successfully committed subtransactions or statements that were not
> > separately rolled back to a previous savepoint.
>
> What I meant was not subtransactions or savepoints, but the funny
> behaviour that operations already successfully carried out never will be
> committed, just because some other operation later within the same
> transaction fails. This is far different from the behaviour of all other
> DBMS I know. Why not:
>
>    begin work;
>    insert into x values (1, 'hi');
>    --> success
>    insert into x values (1, 'there');
>    --> failure due to primary key violation
>    insert into x values (2, 'foo');
>    --> success
>    commit work;
>
> and have two new tuples in the table? Why do _all_ of these operations
> have to be rolled back? I just don't get it that this has anything to do

Right now there's a technical reason (the system won't handle partially completed statements reasonably - but nested transactions will presumably give a way for that to work, hence their mention) and a logical reason (many of the developers seem to believe that partial commit isn't good behavior and that transactions should be all commit or all fail).

Personally, I think it'd be nice to have some way to deal with errors other than retrying the whole sequence, but in the meantime we have a system which basically meets the words, if not the intent, of the spec.

> Hopefully this can be cleared and perhaps improved within PostgreSQL.
> Otherwise, PostgreSQL always claims to be close to standards.

This is actually fairly standards compliant. Note my message about errors being unrecoverable. The SQL spec allows an entire transaction to be rolled back upon unrecoverable errors. Our failed-state behavior may not be (if we sent an error on commit, I believe it would be), but forcing the entire transaction to roll back is. That may not be what the spec intended, but it does seem to be allowed.
Re: [SQL]
You will have to use a WHERE clause or a JOIN in query three, so you will have to include some field, such as itemno, in the result sets of both query1 and query2 on which you can join. Such as:

select query1.sumofqty as sum1,
       query2.sumofqty as sum2,
       query1.sumofqty - query2.sumofqty as remain
from query1, query2
where query1. = query2.

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of [EMAIL PROTECTED]
Sent: Friday, November 07, 2003 11:52 PM
Subject: [SQL]

> Hello everyone, Good day! Could anyone help me translate this query in
> Microsoft Access to PostgreSQL. I'm having a difficulty. Pls...
>
> Query1:
> SELECT items.description, Sum(supplieditems.qty) AS SumOfqty
> FROM items INNER JOIN supplieditems ON items.itemno = supplieditems.itemno
> GROUP BY items.description;
>
> Query2:
> SELECT [items].[description], Sum([customer].[qty]) AS SumOfqty
> FROM (items INNER JOIN OtherItem ON [items].[itemno]=[OtherItem].[Itemno])
> INNER JOIN customer ON [OtherItem].[Itemno]=[customer].[itemcode]
> GROUP BY [items].[description];
>
> Query3:
> SELECT [Query1].[SumOfqty], [Query2].[SumOfqty],
> [Query1]![SumOfqty]-[Query2]![SumOfqty] AS remain
> FROM Query1, Query2;
>
> I have translated Query1 and Query2 in PostgreSQL but I don't know how
> to implement Query3.
Re: [SQL] transaction processing after error in statement
Hi Stephan,

On 9 Nov, Stephan Szabo wrote:
> On Fri, 7 Nov 2003 [EMAIL PROTECTED] wrote:
>
>> Whenever an error occurs within the transaction, PostgreSQL puts the
>> whole transaction in an *ABORT* state, so that there is no difference
>> at all between COMMITing or ROLLBACKing it. Even commands
>> successfully carried out before the error occurred are rolled back,
>> even if I COMMIT the transaction, where no error message whatsoever
>> is shown.
>
> In PostgreSQL all errors are currently considered unrecoverable, and
> all statements in a transaction must commit or rollback together as a
> single unit. In the future an implementation of nested transactions or
> savepoints would presumably relax this limitation to only the
> successfully committed subtransactions or statements that were not
> separately rolled back to a previous savepoint.

What I meant was not subtransactions or savepoints, but the funny behaviour that operations already successfully carried out never will be committed, just because some other operation later within the same transaction fails. This is far different from the behaviour of all other DBMS I know. Why not:

   begin work;
   insert into x values (1, 'hi');
   --> success
   insert into x values (1, 'there');
   --> failure due to primary key violation
   insert into x values (2, 'foo');
   --> success
   commit work;

and have two new tuples in the table? Why do _all_ of these operations have to be rolled back? I just don't get it that this has anything to do with savepoints or so.

I don't see any problem with an error being recoverable, because the second insert failed and does not have to be recovered, while the first and the third worked fine and do not have to be recovered either. When committing a transaction, the effects of all operations that did not fail will be made permanent. This is how transaction processing is described in the literature.

If a programmer wants the whole transaction to fail because one part failed, (s)he can always program a rollback in case of at least one error. But there should always be a difference between a rollback and a commit after at least one statement changing data has reported a success.

Hopefully this can be cleared up and perhaps improved within PostgreSQL. Otherwise, PostgreSQL always claims to be close to standards.

Sincerely,

Holger
--
[EMAIL PROTECTED], Bergisch Gladbach, Germany
Telefon (0 22 02) 5 99 91 oder (01 77) 7 92 24 66
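For illustration, here is a transcript of Holger's example as PostgreSQL actually behaves (the table definition is assumed; the error text matches the 7.x releases):

```sql
CREATE TABLE x (id integer PRIMARY KEY, t text);

BEGIN WORK;
INSERT INTO x VALUES (1, 'hi');      -- success
INSERT INTO x VALUES (1, 'there');   -- ERROR: primary key violation
INSERT INTO x VALUES (2, 'foo');     -- ERROR: current transaction is aborted,
                                     --        queries ignored until end of
                                     --        transaction block
COMMIT WORK;                         -- acts as ROLLBACK: table x stays empty
```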
Re: [SQL] help me...
Hi,

I'm usually wrong, but you can create a view for Query1 and Query2. That is, do:

create view Query1 as [your query for query1];
create view Query2 as [your query for query2];

Then what you are doing should work.

Regards,
Yasir

On Sat, 8 Nov 2003, ron_tabada wrote:

> Hello everyone, Good day! Could anyone help me translate this query in
> Microsoft Access to PostgreSQL. I'm having a difficulty. Pls...
>
> Query1:
> SELECT items.description, Sum(supplieditems.qty) AS SumOfqty
> FROM items INNER JOIN supplieditems ON items.itemno = supplieditems.itemno
> GROUP BY items.description;
>
> Query2:
> SELECT [items].[description], Sum([customer].[qty]) AS SumOfqty
> FROM (items INNER JOIN OtherItem ON [items].[itemno]=[OtherItem].[Itemno])
> INNER JOIN customer ON [OtherItem].[Itemno]=[customer].[itemcode]
> GROUP BY [items].[description];
>
> Query3:
> SELECT [Query1].[SumOfqty], [Query2].[SumOfqty],
> [Query1]![SumOfqty]-[Query2]![SumOfqty] AS remain
> FROM Query1, Query2;
>
> I have translated Query1 and Query2 in PostgreSQL but I don't know how
> to implement Query3.
Re: [SQL] transaction processing after error in statement
Holger Jakobs wrote:
> What I meant was not subtransactions or savepoints, but the funny
> behaviour that operations already successfully carried out never will
> be committed, just because some other operation later within the same
> transaction fails. This is far different from the behaviour of all
> other DBMS I know. Why not:
>
>    begin work;
>    insert into x values (1, 'hi');
>    --> success
>    insert into x values (1, 'there');
>    --> failure due to primary key violation
>    insert into x values (2, 'foo');
>    --> success
>    commit work;
>
> and have two new tuples in the table? Why do _all_ of these operations
> have to be rolled back?

Why is that "funny behaviour" for you? By putting the statements into a transaction block you told the data management system "I want this group of statements to be atomic". Atomic means all or nothing. It might not be exactly what you intended to say, and you have a point if you conclude that PostgreSQL is limited because it doesn't let you say anything in between all or nothing. But after all, thus far it is only doing what you asked for.

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== [EMAIL PROTECTED] #
Re: [SQL] transaction processing after error in statement
> be recovered either. When committing a transaction the effects of all
> operations that did not fail will be made permanent. This is how
> transaction processing is described in the literature.

I would be interested in reading that (URLs please) as I didn't see anything in the spec that was interesting on this topic.

4.8.5 from Framework (part 01):

   An SQL-transaction (transaction) is a sequence of executions of
   SQL-statements that is atomic with respect to recovery. That is to
   say: either the execution result is completely successful, or it has
   no effect on any SQL-schemas or SQL-data.

The "execution result is completely successful" could certainly be used to back up PostgreSQL's choice to force a rollback. However, it doesn't differentiate between execution of what the user requested and execution of recovery procedures on the successful user elements.

Regardless, I wish a commit on a failed transaction would throw an error -- END is good enough for Rollback or Commit.

For PostgreSQL to implement this we need savepoints or nested transactions internally, since in many cases data is physically written in order to perform things like foreign key constraint checks.
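For the record, the savepoint mechanism Rod refers to did later arrive (in PostgreSQL 8.0), and with it the statement-level recovery Holger asked for becomes an explicit opt-in:

```sql
BEGIN;
INSERT INTO x VALUES (1, 'hi');
SAVEPOINT s1;
INSERT INTO x VALUES (1, 'there');   -- fails with a primary key violation
ROLLBACK TO SAVEPOINT s1;            -- discards only the failed statement
INSERT INTO x VALUES (2, 'foo');
COMMIT;                              -- (1,'hi') and (2,'foo') are both kept
```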
Re: [SQL] Dynamic Query for System functions - now()
Kumar wrote:
> Dear Friends,
>
> I am using PostgreSQL 7.3.4 Server on RH Linux 7.2. I am trying to
> generate a dynamic query to fetch the next month interval.
>
> select now()+ interval'1 month'; -- This is working fine.
>
> I want to assign the interval number dynamically, i.e.
> select now()+ interval'n month';

What about:

select now() + n * '1 month'::interval;

Regards
Gaetano Mendola
Re: [SQL] transaction processing after error in statement
Rod Taylor wrote:
>> be recovered either. When committing a transaction the effects of all
>> operations that did not fail will be made permanent. This is how
>> transaction processing is described in the literature.
>
> I would be interested in reading that (URLs please) as I didn't see
> anything in the spec that was interesting on this topic.
>
> 4.8.5 from Framework (part 01):
>
>    An SQL-transaction (transaction) is a sequence of executions of
>    SQL-statements that is atomic with respect to recovery. That is to
>    say: either the execution result is completely successful, or it
>    has no effect on any SQL-schemas or SQL-data.

Although I am not aware of the roots of this discussion, I would like to comment at this point. When we work with sequences, an aborted transaction does have a permanent effect on the last value of the sequence. Is this behaviour not a violation of the above definition of a transaction?

Regards
Mallah.
Re: [SQL] transaction processing after error in statement
> Although I am not aware of the roots of this discussion, I would like
> to comment at this point.
>
> When we work with sequences an aborted transaction does have
> a permanent effect on the last value of the sequence. Is this behaviour
> not a violation of the above definition of a transaction?

I believe you are correct, which is probably why Oracle offers serialized sequences for those who want them to be. Sequences have been explicitly documented as functioning the way they do as a concession to performance.
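A short demonstration of the behaviour Mallah describes (the sequence name is invented): nextval() is never undone by a rollback.

```sql
CREATE SEQUENCE demo_seq;

BEGIN;
SELECT nextval('demo_seq');   -- 1
ROLLBACK;

SELECT nextval('demo_seq');   -- 2, not 1: the aborted transaction still
                              -- consumed a sequence value
```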
[SQL] Is there a more elegant way to write this query?...
Hi-

I'm suffering from a performance problem, but when I look at my query, I'm not convinced that there isn't a better way to handle this in SQL. So I'm seeking advice here before I go to the performance list.

I have three tables: case, actor and actor_case_assignment. As the names imply, actor_case_assignment contains records that assign an actor to a case. Actors such as attorneys or judges may have many cases, while the average actor (we hope) has only one.

What I'm trying to do is link these tables to get back a single row per actor that shows the actor's name, the number of cases that actor is assigned to, and, if they have only one case, the number for that case. This means I have to do some grouping to get the case count, but I'm then forced to use an aggregate function like max() on the other fields. I hope there's a better way. Any suggestions?

Here's what I'm using right now:

select actor.actor_id,
       max(actor.actor_full_name),
       max(case_data.case_public_id),
       max(case_data.case_id),
       count(case_data.case_id) as case_count
from actor, actor_case_assignment, case_data
where actor.actor_full_name_uppercase like upper('martin%')
  and actor.actor_id = actor_case_assignment.actor_id
  and case_data.case_id = actor_case_assignment.case_id
group by actor.actor_id
order by max(actor.actor_full_name), case_count desc
limit 1000;

Thanks!
-Nick

---------------------------------------------------------------------
Nick Fankhauser [EMAIL PROTECTED] Phone 1.765.965.7363 Fax 1.765.962.9788
doxpop - Court records at your fingertips - http://www.doxpop.com/
Re: [SQL] Is there a more elegant way to write this query?...
On Nov 10, 2003, at 1:02 PM, Nick Fankhauser wrote:

> Hi- I'm suffering from a performance problem, but when I look at my query, I'm not convinced that there isn't a better way to handle this in SQL, so I'm seeking advice here before I go to the performance list.

An explain analyze would help.

> What I'm trying to do is link these tables to get back a single row per actor that shows the actor's name, the number of cases that actor is assigned to, and if they only have one case, I want the number for that case. This means I have to do some grouping to get the case count, but I'm then forced to use an aggregate function like max on the other fields. I hope there's a better way. Any suggestions?

How about:

select actor.actor_full_name,
       actor.actor_id,
       s1.ctCases,
       s1.case_id,
       case_data.case_public_id
from actor
inner join (select actor_id,
                   count(*) as ctCases,
                   max(case_id) as case_id
            from actor_case_assignment
            group by actor_id) as s1
  on (actor.actor_id = s1.actor_id)
left outer join case_data
  on (s1.case_id = case_data.case_id)
limit 1000;

If you don't need the public_id, then you don't even need to join in the case_data table.

eric
Re: [SQL] pg 7.4.rc1, Range query performance
Hi, I tried CLUSTER and it did improve performance somewhat. The query against the "clustered" table performs about five (5) times better than against the same table "non-clustered". However, even after the table was clustered, the difference in performance between a single-record query and a range query is significant:

table Test (see below) has 10M records
single record - 31 ms, and remains mostly constant as the table grows
range query returning 30 records - about 10 secs, and grows together with the table

Also, CLUSTER locks the table (in our case this also means locking the database), so it may be impossible to use it in production on large tables (impossible in our case).

It feels like I really have a problem here. Any ideas? Thanks

P.S. For the future I would consider implementing "CREATE [CLUSTERED] INDEX"

--- Bruce Momjian <[EMAIL PROTECTED]> wrote:
> Try CLUSTER --- that usually helps with index scans on ranges.
>
> ow wrote:
> > Postgresql 7.4.rc1 on i686 pc linux compiled by gcc 2.06. All configuration settings are default.
> >
> > Hi,
> >
> > Trying to find a way to improve range query performance. The table Test has about 30 million records.
> >
> > -- DLong, Dtimestamp, Dint, etc. are domains of the respective types.
> > create table Test (
> >   id DLong not null,
> >   a  Dtimestamp null,
> >   b  Dint not null,
> >   c  Dint not null,
> >   d  Dstring null,
> >   constraint PK_id primary key (id),
> >   constraint AK_abc unique (a, b, c)
> > );
> >
> > The following query retrieves a single record; it runs against the AK index and is extremely fast (30-150 ms) for a table of this size:
> >
> > -- returns result in 30-150 ms
> > select * from Test
> > where a = '2002-09-01'
> >   and b = 5
> >   and c = 255
> >
> > OTOH, the following range query that returns 30 records performs much slower, about 33000 ms. The query is using the AK index, as it should, but why does it take so much longer to scan the index for a range of just 30 records? I see that PG is hitting the disk very intensively for this query. Can the query be rewritten, etc. to improve performance? Thanks
> >
> > select * from Test
> > where a >= '2002-09-01'
> >   and a <= '2002-09-30'
> >   and b = 5
> >   and c = 255
> >
> > QUERY PLAN
> > Index Scan using ak_abc on test (cost=0.00..106.27 rows=30 width=53) (actual time=33.536..33200.998 rows=30 loops=1)
> >   Index Cond: (((a)::timestamp without time zone >= '2002-09-01 00:00:00'::timestamp without time zone) AND ((a)::timestamp without time zone <= '2002-09-30 00:00:00'::timestamp without time zone) AND ((b)::integer = 5) AND ((c) (..)
> > Total runtime: 33201.219 ms
Re: [SQL] pg 7.4.rc1, Range query performance
--- Stephan Szabo <[EMAIL PROTECTED]> wrote:
> I'm not sure that AK_abc is the best index for checking a range on a and
> single values on b and c. I'd think that something like an index
> on (b,c,a) would probably be better for this purpose (without doing any
> testing ;) ).

That would not work for us since most of the time users are working with the most recent data, hence column "a" is the first in the index.

Thanks
Re: [SQL] pg 7.4.rc1, Range query performance
ow wrote:
> Hi,
>
> I tried CLUSTER and it did improve performance, somewhat. The query against
> the "clustered" table performs about five (5) times better than the same table
> "non-clustered". However, even after that table was clustered, the difference
> in performance between a single-record query and a range query is significant:
>
> table Test (see below) has 10M records
> single record - 31 ms and remains mostly constant as the table grows
> range query returning 30 records - about 10 secs and grows together with the table
>
> Also, CLUSTER is locking the table (in our case this also means locking the
> database), so it may be impossible to use it in production on large tables
> (impossible in our case).
>
> It feels like I really have a problem here. Any ideas? Thanks
>
> P.S. For the future I would consider implementing "CREATE [CLUSTERED] INDEX"

Strange that 30 records take 30x the time of one record. Can you run ANALYZE and send us an EXPLAIN of the query to make sure it hasn't changed?

--
Bruce Momjian | http://candle.pha.pa.us
[EMAIL PROTECTED] | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
Re: [SQL] pg 7.4.rc1, Range query performance
On Mon, 10 Nov 2003, ow wrote:
> --- Stephan Szabo <[EMAIL PROTECTED]> wrote:
> > I'm not sure that AK_abc is the best index for checking a range on a and
> > single values on b and c. I'd think that something like an index
> > on (b,c,a) would probably be better for this purpose (without doing any
> > testing ;) ).
>
> That would not work for us since most of the time users are working with the most
> recent data, hence column "a" is the first in the index.

Are you doing a lot of queries where you're only searching on a? If so, then you may want to consider a second, non-unique index to speed up this sort of query. With a range query on a and fixed values on b and c, you're really not utilizing an index in that order efficiently.
[SQL] Query Problem
Hi there! If I have a table like below:

kk  kj  pngk  vote
01  02  c     10
01  02  b     5

How do I make a query so that I can get a result like this?

kk  kj  pngk  vote
01  02  c,b   15

Any help pretty much appreciated.
Re: [SQL] pg 7.4.rc1, Range query performance
--- Bruce Momjian <[EMAIL PROTECTED]> wrote:
> Strange that 30 records take 30x the time of one record. Can you run
> ANALYZE and send us an EXPLAIN of the query to make sure it hasn't
> changed?

explain analyze select * from Test
where a >= '2002-06-18'
  and a <= '2002-07-18'
  and b = 5
  and c = 255

QUERY PLAN
Index Scan using ak_abc on test (cost=0.00..121.23 rows=34 width=53) (actual time=18.060..10726.387 rows=31 loops=1)
  Index Cond: (((a)::timestamp without time zone >= '2002-06-18 00:00:00'::timestamp without time zone) AND ((a)::timestamp without time zone <= '2002-07-18 00:00:00'::timestamp without time zone) AND ((b)::integer = 5) AND ((c) (..)
Total runtime: 10726.663 ms

Thanks
Re: [SQL] Query Problem
Abdul Wahab Dahalan wrote:
> If I've a table like below.
> kk  kj  pngk  vote
> 01  02  c     10
> 01  02  b     5
> How do I make a query so that I can get a result like this?
> kk  kj  pngk  vote
> 01  02  c,b   15

create or replace function accum_text(text, text) returns text as
  'select case when $1 = '''' then $2 else $1 || '','' || $2 end'
language sql;

CREATE AGGREGATE concat(BASETYPE = text, SFUNC = accum_text, STYPE = text, INITCOND = '');

create table t(kk text, kj text, pngk text, vote int);
insert into t values('01','02','c',10);
insert into t values('01','02','b',5);

regression=# select kk, kj, concat(pngk), sum(vote) from t group by kk, kj;
 kk | kj | concat | sum
----+----+--------+-----
 01 | 02 | c,b    |  15
(1 row)

HTH,
Joe
Re: [SQL] pg 7.4.rc1, Range query performance
ow wrote:
> --- Bruce Momjian <[EMAIL PROTECTED]> wrote:
> > Strange that 30 records take 30x the time of one record. Can you run
> > ANALYZE and send us an EXPLAIN of the query to make sure it hasn't
> > changed?
>
> explain analyze select * from Test
> where a >= '2002-06-18'
>   and a <= '2002-07-18'
>   and b = 5
>   and c = 255
>
> QUERY PLAN
> Index Scan using ak_abc on test (cost=0.00..121.23 rows=34 width=53) (actual time=18.060..10726.387 rows=31 loops=1)
>   Index Cond: (((a)::timestamp without time zone >= '2002-06-18 00:00:00'::timestamp without time zone) AND ((a)::timestamp without time zone <= '2002-07-18 00:00:00'::timestamp without time zone) AND ((b)::integer = 5) AND ((c) (..)
> Total runtime: 10726.663 ms

OK, I see now. You must have a lot of rows from '2002-06-18' to '2002-07-18', but only 31 with the b,c conditions --- not much we can do to speed this up, because the condition on 'a' isn't restrictive enough, and I am not sure b or c is either. It is all those lookups to find the rows that match a, then b/c, that are taking the time. In fact, it now makes sense that it takes 30x the time, because all the time is spent traversing the index looking for match #1, then match #2, etc. We would do this quickly if there were lots of rows matching a specific 'a' value, e.g.

> explain analyze select * from Test
> where a = '2002-06-18'   <---
>   and b = 5
>   and c = 255

The index/cluster is grouping the rows, but the grouping is by timestamp value, not by the range >= '2002-06-18' and <= '2002-07-18'. Even though you have the index on (a,b,c), it is really only using the index on 'a': the b,c part of the index only helps when you have multiple duplicate 'a' values, but in this case you have an entire month's worth. The only quick way would be to create a functional index on 'a' and cluster on that:

create index ii on x (date_part('month', a), b, c)

or something like that. You can't actually index on a function and then ordinary columns, so you would need a pretty fancy function in plpgsql that converted the a,b,c values into a nice text string, and then index on that. Then, if you used that function call in your WHERE clause, the index would be used and it would be very fast, because all your 'a' values would be the same, and it could then jump to b and c quickly.

Sorry there isn't a simple solution.
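A rough sketch of the functional-index idea Bruce describes. The function and index names are made up, the separator and key format are arbitrary, and the exact behaviour on 7.3/7.4 is untested:

```sql
-- Fold a's month together with b and c into one text key.
CREATE FUNCTION abc_key(timestamp, int, int) RETURNS text AS '
    SELECT to_char($1, ''YYYY-MM'') || ''/'' || $2::text || ''/'' || $3::text
' LANGUAGE sql IMMUTABLE;

CREATE INDEX ii ON Test (abc_key(a, b, c));

-- The WHERE clause must use the exact same expression for the
-- index to be considered:
SELECT * FROM Test WHERE abc_key(a, b, c) = '2002-06/5/255';
```

Note this trades the arbitrary date range for a fixed month bucket, which matches the access pattern described but is less general than the range query it replaces.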
Re: [SQL] pg 7.4.rc1, Range query performance
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Sorry there isn't a simple solution.

But there is: make an index with the column order (b,c,a).

regards, tom lane
Re: [SQL] pg 7.4.rc1, Range query performance
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Sorry there isn't a simple solution.
>
> But there is: make an index with the column order (b,c,a).

Oh, yeah, right. If he puts the columns on which he is doing a straight equals comparison first, the 'a' comparison will work fine. Great.
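Tom's suggestion, applied to the Test schema from earlier in the thread, would look something like this (a sketch; the index name is made up):

```sql
-- Equality columns first, the range column last:
CREATE INDEX ak_bca ON Test (b, c, a);

-- The planner can now descend the btree directly to b = 5, c = 255
-- and read the matching 'a' values as one contiguous chunk of the index:
SELECT * FROM Test
WHERE b = 5
  AND c = 255
  AND a >= '2002-06-18'
  AND a <= '2002-07-18';
```

With (a,b,c), every row in the month-long 'a' range had to be visited and filtered on b and c; with (b,c,a), only the ~30 qualifying index entries are touched.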