[SQL] Generating dates prior to generate_series
I'm using PostgreSQL version 7.3.2, where generate_series() is not available, so here is a function that generates a series of dates. The function steps backwards if the second argument is earlier than the first one. Check the two SELECT statements at the end. Best Regards, Roger Tannous.

CREATE FUNCTION date_range(VARCHAR, VARCHAR) RETURNS SETOF DATE AS '
DECLARE
    dateStart ALIAS FOR $1;
    dateEnd ALIAS FOR $2;
    forwardSteps BOOLEAN := true;
    tmpDate DATE;
BEGIN
    IF (to_date(dateStart, ''yyyy-mm-dd'') > to_date(dateEnd, ''yyyy-mm-dd'')) THEN
        forwardSteps := false;
    END IF;
    tmpDate := to_date(dateStart, ''yyyy-mm-dd'');
    WHILE ((forwardSteps AND tmpDate <= to_date(dateEnd, ''yyyy-mm-dd''))
           OR (NOT forwardSteps AND tmpDate >= to_date(dateEnd, ''yyyy-mm-dd''))) LOOP
        RETURN NEXT tmpDate;
        IF forwardSteps THEN
            tmpDate := tmpDate + interval ''1 day'';
        ELSE
            tmpDate := tmpDate - interval ''1 day'';
        END IF;
    END LOOP;
    RETURN;
END;
' LANGUAGE 'plpgsql';

SELECT * FROM date_range('2007-01-03', '2007-03-20');
SELECT * FROM date_range('2007-04-03', '2007-03-20');
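For readers on newer servers: from PostgreSQL 8.0 on, the built-in generate_series() removes the need for this function. A sketch of the equivalent queries (date subtraction yields an integer number of days, which feeds the series):

```sql
-- Ascending range of dates via generate_series (PostgreSQL 8.0+):
SELECT (DATE '2007-01-03' + s)::date AS d
FROM generate_series(0, DATE '2007-03-20' - DATE '2007-01-03') AS s;

-- Descending range: count the offsets downward from the start date instead.
SELECT (DATE '2007-04-03' - s)::date AS d
FROM generate_series(0, DATE '2007-04-03' - DATE '2007-03-20') AS s;
```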
[SQL] CPU statistics
Hi list, Is it possible to retrieve information about the server hardware via PostgreSQL? Regards, -- Ezequias Rodrigues da Rocha
Re: [SQL] plpgsql function question
If you use a plpgsql function to select the row you want to validate, it will make life much easier. Something like:

$$
declare
    my_row a_row_type;
    is_ok integer;
begin
    select into my_row * from a_row_type where ...
    is_ok := my_a_validate(my_row);
    return is_ok;
$$

On Apr 4, 2007, at 1:01 AM, A. Kretschmer wrote:
> Because your function expects one parameter of your new type, you have to CAST your data into this type:
>
> test=# select * from my_a((1, 'foo', current_date)::a);
>  my_a
> ------
>     1
> (1 row)

John DeSoi, Ph.D. http://pgedit.com/ Power Tools for PostgreSQL
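A fuller sketch of the wrapper John describes, with hypothetical names (a table `a` with a key column `id` is assumed; `my_a()` is the validator from the thread):

```sql
-- Hypothetical sketch: fetch the row by key inside plpgsql, then validate it,
-- so callers pass a plain id instead of building a composite value by hand.
CREATE OR REPLACE FUNCTION validate_a(p_id integer) RETURNS integer AS $$
DECLARE
    my_row a%ROWTYPE;   -- row variable matching table "a"
BEGIN
    SELECT INTO my_row * FROM a WHERE id = p_id;  -- assumes key column "id"
    RETURN my_a(my_row);                          -- my_a() is from the thread
END;
$$ LANGUAGE plpgsql;
```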
[SQL] Setting high performance on huge server
Hi list, Could someone give me a little help with my settings? How do I increase performance, given that my server is a large machine? Here are some parameters I suppose are important in these settings:

"shared_buffers";"8000kB";"Sets the number of shared memory buffers used by the server."
"max_connections";"100";"Sets the maximum number of concurrent connections."
"work_mem";"1MB";"Sets the maximum memory to be used for query workspaces."

-- Ezequias Rodrigues da Rocha
Re: [SQL] Setting high performance on huge server
> How do I increase performance, given that my server is a large machine? Here are some parameters I suppose are important in these settings:
>
> "shared_buffers";"8000kB";"Sets the number of shared memory buffers used by the server."
> "max_connections";"100";"Sets the maximum number of concurrent connections."
> "work_mem";"1MB";"Sets the maximum memory to be used for query workspaces."

Do you have some hardware specifications, i.e. how much RAM, which CPU type, etc.? regards Claus
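As a rough illustration of the kind of changes usually discussed for a RAM-rich machine (the actual values must be tuned against the hardware specifications Claus asks about; these numbers are assumptions, not recommendations):

```ini
# postgresql.conf -- illustrative starting points only
shared_buffers = 1GB          # far above the 8000kB currently shown
work_mem = 16MB               # per sort/hash operation, so mind max_connections
effective_cache_size = 6GB    # planner hint: OS cache plus shared_buffers
maintenance_work_mem = 256MB  # speeds up VACUUM and CREATE INDEX
```

Each setting change needs a config reload (and for shared_buffers, a restart) plus benchmarking before and after.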
Re: [SQL] Moving a simple function to pl/pgsql (Novice question)
--- Paul Lambert <[EMAIL PROTECTED]> wrote:
> The purpose being when a row in a table in one database is updated, it will copy (or replicate I guess) the record into a different table in another database on the same server. (deleting said record first if it already exists)
>
> What is the best way to do this within Postgres?

In PostgreSQL, there are two parts to implementing a trigger: the trigger function and the actual trigger on the table that calls this function. Here is a good example: http://www.postgresql.org/docs/8.2/interactive/plpgsql-trigger.html#PLPGSQL-TRIGGER-AUDIT-EXAMPLE One other point to be aware of: PostgreSQL does not allow you to reference other databases in the DB cluster directly. It is possible, but you need to add a contrib module called dblink: http://pgfoundry.org/projects/snapshot/ Hope this helps, Regards, Richard Broersma Jr.
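A minimal sketch combining the two pieces Richard describes, a trigger plus dblink; every table, column, and connection name here is illustrative, not from the original post:

```sql
-- Illustrative only: push an inserted/updated row into another database
-- via the dblink contrib module, deleting any existing copy first.
CREATE OR REPLACE FUNCTION replicate_row() RETURNS trigger AS $$
BEGIN
    PERFORM dblink_exec('dbname=otherdb',
        'DELETE FROM mirror_table WHERE id = ' || NEW.id);
    PERFORM dblink_exec('dbname=otherdb',
        'INSERT INTO mirror_table VALUES ('
        || NEW.id || ', ' || quote_literal(NEW.payload) || ')');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER replicate_after_change
AFTER INSERT OR UPDATE ON source_table
FOR EACH ROW EXECUTE PROCEDURE replicate_row();
```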
Solved - [SQL] best way: diary functions.
My solution that works is:

select date_range as rdate,
       case when rsgsid is not null then 2
            when aid is not null then 1
            else 0
       end as status
  from date_range('2007-04-01','2007-04-30')
  left join availability a
         on a.asid = 1
        and (date_range,date_range) overlaps (a.asdate,a.afdate)
  left join roster_staff r
         on r.rsdate = date_range;

Can anyone see any problems or improvements to this? -- Gary Stainburn
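One behaviour note on the join condition (not a correction): a zero-length `(d, d) OVERLAPS (s, f)` pair is treated as an instant tested against the half-open interval `[s, f)`, so it matches `s <= d < f` and excludes the end date. Spelled out with explicit comparisons, the availability join is equivalent to:

```sql
-- (d, d) OVERLAPS (s, f) is true when s <= d < f; the explicit form:
select date_range as rdate
  from date_range('2007-04-01','2007-04-30')
  left join availability a
         on a.asid = 1
        and a.asdate <= date_range
        and date_range < a.afdate;  -- BETWEEN would also include a.afdate
```

If rows valid on `afdate` itself should match, BETWEEN (inclusive on both ends) is the intent, not OVERLAPS.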
Re: [SQL] CPU statistics
On Wednesday, 4 April 2007 at 14:36, [EMAIL PROTECTED] wrote:
> Is it possible to retrieve information about the server hardware via PostgreSQL?

You'd have to write your own function. -- Peter Eisentraut http://developer.postgresql.org/~petere/
Re: [SQL] CPU statistics
On 4/4/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Is it possible to retrieve information about the server hardware via PostgreSQL?

Ezequias, Please read the manual and use Google. You're still asking two or three questions per day which could easily be answered in already published documentation. In regard to your other question about performance, see http://www.varlena.com/GeneralBits/33.html (found with a single Google search of: tune postgresql performance) -- Jonah H. Harris, Software Architect | phone: 732.331.1324 EnterpriseDB Corporation | fax: 732.331.1301 33 Wood Ave S, 3rd Floor | [EMAIL PROTECTED] Iselin, New Jersey 08830 | http://www.enterprisedb.com/
Re: [SQL] Moving a simple function to pl/pgsql (Novice question)
Richard Broersma Jr wrote:
> In PostgreSQL, there are two parts to implementing a trigger: the trigger function and the actual trigger on the table that calls this function. Here is a good example: http://www.postgresql.org/docs/8.2/interactive/plpgsql-trigger.html#PLPGSQL-TRIGGER-AUDIT-EXAMPLE One other point to be aware of: PostgreSQL does not allow you to reference other databases in the DB cluster directly. It is possible, but you need to add a contrib module called dblink: http://pgfoundry.org/projects/snapshot/ Hope this helps, Regards, Richard Broersma Jr.

Perfect, thanks Richard. Regards, Paul. -- Paul Lambert Database Administrator AutoLedgers
[SQL] slow query
Hi all, I ran a simple query like the one displayed below and it takes a lot of time to execute on this table. This table has 48 million records, and I worry about it a lot because I need to join it against another table with around 13 million records. I've tried vacuuming this table many times but that didn't help much. Is there any other trick to speed up this table? Once I tried deleting and restoring a table and it worked perfectly fine, but that was for a small table of up to 5-6 million records.

sm=> explain analyze select * from ma limit 10;
                                                      QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..0.45 rows=10 width=76) (actual time=21985.292..22204.340 rows=10 loops=1)
   ->  Seq Scan on ma  (cost=0.00..2181956.92 rows=48235392 width=76) (actual time=21985.285..22204.308 rows=10 loops=1)
 Total runtime: 22204.476 ms
(3 rows)

-- Thanks, Sumeet
Re: [SQL] Very slow DELETE on 4000 rows of 55000 row table
Tom Lane wrote:
> Bryce Nesbitt <[EMAIL PROTECTED]> writes:
>> I've got a DELETE FROM that seems to run forever, pegging the CPU at 100%. I can't figure out why it's slow. Any clues?
>
> Unindexed foreign key constraints pointing to this table, perhaps? EXPLAIN ANALYZE would give a great deal more clue than plain EXPLAIN.
>
> regards, tom lane

Hmm, excellent point. There is a column with no index that points to the table in question:

Indexes:
    "eg_order_line_pkey" PRIMARY KEY, btree (order_line_id)
    "ixf8331222783867cc" btree (order_id)
Foreign-key constraints:
    "fkf8331222783867cc" FOREIGN KEY (order_id) REFERENCES eg_order(order_id)
    "fkf83312228edf278d" FOREIGN KEY (invoice_id) REFERENCES eg_invoice(invoice_id)
    "order_line_to_cso" FOREIGN KEY (cso_id) REFERENCES eg_cso(cso_id)

But I DELETE all of those conflicting rows prior to the slow DELETE, precisely so the FK check is never hit. Should I be looking at subverting the FK check mechanism somehow? The necessary index would be huge, and relevant only to this particular operation, which happens every few months, if that.
Re: [SQL] Very slow DELETE on 4000 rows of 55000 row table
> But I DELETE all of those conflicting rows prior to the slow DELETE, precisely so the FK check is never hit. Should I be looking at subverting the FK check mechanism somehow? The necessary index would be huge, and relevant only to this particular operation, which happens every few months, if that.

True, but without an index, it still has to scan the table just to be sure.
Re: [SQL] slow query
On Wed, 04 Apr 2007 at 23:17:54 -0400, Sumeet wrote:
> sm=> explain analyze select * from ma limit 10;
>                                                       QUERY PLAN
> -----------------------------------------------------------------------------------------------------------------------
>  Limit  (cost=0.00..0.45 rows=10 width=76) (actual time=21985.292..22204.340 rows=10 loops=1)
>    ->  Seq Scan on ma  (cost=0.00..2181956.92 rows=48235392 width=76) (actual time=21985.285..22204.308 rows=10 loops=1)
>  Total runtime: 22204.476 ms
> (3 rows)

Which version? I have 8.1, and on a table with 1.9 million records I get:

scholl=# explain analyse select * from bde_meldungen limit 10;
                                                       QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..0.34 rows=10 width=157) (actual time=19.570..19.660 rows=10 loops=1)
   ->  Seq Scan on bde_meldungen  (cost=0.00..65748.33 rows=1957933 width=157) (actual time=19.565..19.587 rows=10 loops=1)
 Total runtime: 19.845 ms
(3 rows)

If I remember correctly, LIMIT saw a massive improvement in 8.0 or 8.1. Andreas -- Andreas Kretschmer Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header) GnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net
Re: [SQL] Very slow DELETE on 4000 rows of 55000 row table
Richard Broersma Jr <[EMAIL PROTECTED]> writes:
>> But I DELETE all of those conflicting rows prior to the slow DELETE, precisely so the FK check is never hit. Should I be looking at subverting the FK check mechanism somehow? The necessary index would be huge, and relevant only to this particular operation, which happens every few months, if that.
> True, but without an index, it still has to scan the table just to be sure.

If this is only a once-in-awhile thing, maybe you could build the index, do the deletes, drop the index ... regards, tom lane
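Tom's build/delete/drop suggestion, sketched against the constraint names from Bryce's earlier \d output; the WHERE clause is purely illustrative:

```sql
-- Temporarily index the unindexed FK column, run the purge, then drop it,
-- so the per-row FK check on eg_order_line becomes an index probe.
CREATE INDEX tmp_order_line_invoice ON eg_order_line (invoice_id);
DELETE FROM eg_invoice WHERE invoice_date < '2006-01-01';  -- illustrative filter
DROP INDEX tmp_order_line_invoice;
```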
Re: [SQL] slow query
"A. Kretschmer" <[EMAIL PROTECTED]> writes: > am Wed, dem 04.04.2007, um 23:17:54 -0400 mailte Sumeet folgendes: >> sm=> explain analyze select * from ma limit 10; >> QUERY >> PLAN >> - >> Limit (cost=0.00..0.45 rows=10 width=76) (actual time=21985.292..22204.340 >> rows=10 loops=1) >> -> Seq Scan on ma (cost=0.00..2181956.92 rows=48235392 width=76) (actual >> time=21985.285..22204.308 rows=10 loops=1) >> Total runtime: 22204.476 ms >> (3 rows) > which version? I'm betting the problem is poor vacuuming practice leading to lots of dead space. There's no way it takes 22 sec to read 10 rows if the table is reasonably dense. regards, tom lane ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [SQL] slow query
On Thu, 05 Apr 2007 at 01:27:25 -0400, Tom Lane wrote:
> I'm betting the problem is poor vacuuming practice leading to lots of dead space. There's no way it takes 22 sec to read 10 rows if the table is reasonably dense.

This was my first thought too, but quoting the original post:

| I've tried vacuuming this table many times

Andreas -- Andreas Kretschmer Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header) GnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net