[PERFORM] DB2 feature
Hi,

I saw this article about DB2: http://www-106.ibm.com/developerworks/db2/library/techarticle/dm-0411rielau/?ca=dgr-lnxw06SQL-Speed

The listing 2 example:

SELECT D_TAX, D_NEXT_O_ID
  INTO :dist_tax, :next_o_id
  FROM OLD TABLE ( UPDATE DISTRICT
                      SET D_NEXT_O_ID = D_NEXT_O_ID + 1
                    WHERE D_W_ID = :w_id
                      AND D_ID = :d_id
                 ) AS OT

I am not an expert in the rule system, but I had a look at http://www.postgresql.org/docs/7.4/static/rules-update.html and it seems possible in PostgreSQL to build a non-standard SQL query that does something like listing 2. I would like to know from a PostgreSQL expert whether such a query is really new to DB2, as the article states, or whether PostgreSQL already has the same kind of power.

Regards,
Jean-Gérard Pailloncy

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
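For comparison: PostgreSQL did not have an equivalent at the time of 7.4, but modern PostgreSQL (8.2 and later) offers UPDATE ... RETURNING, which covers the same use case as DB2's SELECT ... FROM OLD TABLE. A hedged sketch against the table from listing 2 (column names taken from the article; note one semantic difference):

```sql
-- Sketch only. RETURNING yields the *post-update* row values, whereas
-- DB2's OLD TABLE yields the pre-update ones, so D_NEXT_O_ID here is
-- the already-incremented value.
UPDATE DISTRICT
   SET D_NEXT_O_ID = D_NEXT_O_ID + 1
 WHERE D_W_ID = :w_id
   AND D_ID   = :d_id
RETURNING D_TAX, D_NEXT_O_ID;
```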
Re: [PERFORM] INSERT RULE
I tried to do:

CREATE RULE ndicti AS ON INSERT TO ndict
    DO INSTEAD
    INSERT INTO 'ndict_' || (NEW.word_id & 255)
    VALUES( NEW.url_id, NEW.word_id, NEW.intag);

I got an error on 'ndict_'. I did not find the right syntax. In fact I discovered that SELECT * FROM / INSERT INTO do not accept a function that returns the name of a table in place of the table name, only functions that return rows. I'm dead. Is this feature possible, or planned? Is there a trick to do it?

Regards,
Jean-Gérard Pailloncy
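A rule cannot compute its target table at run time, but a PL/pgSQL trigger can, via EXECUTE. A hedged sketch (not 7.4 syntax: format() needs PostgreSQL 9.1+, EXECUTE ... USING needs 8.4+, dollar-quoting needs 8.0+; on older versions the SQL string would be built with quote_ident() and ||; the three-digit zero-padding matches the ndict_000 naming assumed here):

```sql
CREATE FUNCTION ndict_route() RETURNS trigger AS $$
BEGIN
    -- Build the partition name from the low 8 bits of word_id and
    -- insert into that table dynamically.
    EXECUTE format('INSERT INTO %I VALUES ($1, $2, $3)',
                   'ndict_' || lpad((NEW.word_id & 255)::text, 3, '0'))
    USING NEW.url_id, NEW.word_id, NEW.intag;
    RETURN NULL;  -- suppress the insert into the parent table itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER ndicti BEFORE INSERT ON ndict
    FOR EACH ROW EXECUTE PROCEDURE ndict_route();
```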
[PERFORM] INSERT RULE
Hi,

I am testing a configuration where one table is divided into 256 sub-tables, and I use a RULE to offer a single view of the data. For INSERT I have created 256 rules like:

CREATE RULE ndicti_000 AS ON INSERT TO ndict
    WHERE (NEW.word_id & 255) = 000 DO INSTEAD
    INSERT INTO ndict_000 VALUES( NEW.url_id, 000, NEW.intag);
CREATE RULE ndicti_001 AS ON INSERT TO ndict
    WHERE (NEW.word_id & 255) = 001 DO INSTEAD
    INSERT INTO ndict_001 VALUES( NEW.url_id, 001, NEW.intag);

And that works, but a bit slowly. I tried to do:

CREATE RULE ndicti AS ON INSERT TO ndict
    DO INSTEAD
    INSERT INTO 'ndict_' || (NEW.word_id & 255)
    VALUES( NEW.url_id, NEW.word_id, NEW.intag);

I got an error on 'ndict_'. I did not find the right syntax. Any help is welcome.

Regards,
Jean-Gérard Pailloncy
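An alternative to 256 rewrite rules is table inheritance, which gives the single unified view for SELECT without any rule at all. A hedged sketch (column types are guesses from the VALUES lists above; the CHECK constraints document the partitioning scheme but do not route inserts, which would still need rules or a trigger):

```sql
CREATE TABLE ndict (
    url_id  integer,
    word_id integer,
    intag   integer
);

-- Each sub-table is a child of ndict; SELECT ... FROM ndict sees all of them.
CREATE TABLE ndict_000 (
    CHECK ((word_id & 255) = 0)
) INHERITS (ndict);

CREATE TABLE ndict_001 (
    CHECK ((word_id & 255) = 1)
) INHERITS (ndict);
-- ... and so on for the remaining 254 partitions.
```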
Re: [PERFORM] Fwd: FreeBSD, PostgreSQL, semwait and sbwait!
> We're having a substantial problem with our FreeBSD 5.2 database server
> running PostgreSQL - it's getting a lot of traffic (figure about 3,000
> queries per second), but queries are slow, and it's seemingly waiting on
> things other than CPU time.

> Could this be a 5.2 performance issue? In spite of certain areas where
> the 5.x series performance is known to be much better than 4.x (e.g.
> networking), this may not be manifested in practice for a complete
> application. (e.g. I am still running 4.9 as it vastly outperformed 5.1
> for ~100 database sessions running queries - note that I have not tried
> out 5.2, so am happy to be corrected on this.)

I found the same problem. I use OpenBSD 3.3 on a Pentium 2.4 GHz with 1 GB RAM and RAID 10, with PostgreSQL 7.4.1 built with a 32 KB block size (to match the ffs and RAID block sizes), and the pg_autovacuum daemon from Pg 7.5. I run a web indexer.

sd0: RAID-1 with system, pg-log and indexer-log
sd1: RAID-10 with pg-data and indexer-data

The sd1 disk achieves between 10 and 40 MB/s in normal operation. When I see semwait in top, the system just waits ;-) Not much disk activity. Not much logging in pg or the indexer. Just waiting. What can I do?
$ sudo top -s1 -S -I
load averages: 4.45, 4.45, 3.86    11:25:52
97 processes: 1 running, 96 idle
CPU states: 2.3% user, 0.0% nice, 3.8% system, 0.8% interrupt, 93.1% idle
Memory: Real: 473M/803M act/tot Free: 201M Swap: 0K/3953M used/tot

  PID USERNAME PRI NICE  SIZE   RES STATE  WAIT     TIME    CPU COMMAND
 2143 postgres  -5    0 4008K   37M sleep  biowai   1:02  1.81% postgres
28662 postgres  14    0 4060K   37M sleep  semwai   0:59  1.17% postgres
25794 postgres  14    0 4072K   37M sleep  semwai   1:30  0.93% postgres
23271 postgres  -5    0 4060K   37M sleep  biowai   1:13  0.29% postgres
14619 root      28    0  276K  844K run    -        0:01  0.00% top

$ vmstat -w1 sd0 sd1
 r b w    avm    fre  flt re pi po fr sr sd0 sd1   in    sy  cs us sy id
 0 4 0 527412  36288 1850  0  0  0  0  0  26  72  368  8190 588  0  4 96
 0 4 0 527420  36288 1856  0  0  0  0  0   0  86  356  8653 620  2  2 97
 0 4 0 527432  36280 1853  0  0  0  0  0   0  54  321  8318 458  1  3 96
 0 4 0 527436  36248 1864  0  0  0  0  0   0  77  358  8417 539  1  2 97
 0 4 0 522828  40932 2133  0  0  0  0  0   7  70  412 15665 724  2  3 95
 0 4 0 522896  40872 1891  0  0  0  0  0  15  72  340  9656 727  3  5 92
 0 4 0 522900  40872 1841  0  0  0  0  0   0  69  322  8308 536  1  2 98
 0 4 0 522920  40860 1846  0  0  0  0  0   1  69  327  8023 520  2  2 97
 0 4 0 522944  40848 1849  0  0  0  0  0   4  76  336  8035 567  1  2 97
 0 4 0 522960  40848 1843  0  0  0  0  0   0  77  331 14669 587  3  2 95
 0 4 0 522976  40836 1848  0  0  0  0  0   4  81  339  8384 581  1  2 97
 0 4 0 522980  40836 1841  0  0  0  0  0   3  65  320  8068 502  1  4 95
 0 4 0 523000  40824 1848  0  0  0  0  0  14  74  341  8226 564  3  2 95
 0 4 0 523020  40812 1844  0  0  0  0  0   0  67  317  7606 530  2  1 97
 1 4 0 523052  40796 1661  0  0  0  0  0   0  68  315 11603 493  2  2 97
 1 4 0 523056  40800  233  0  0  0  0  0  12  87  341 12550 609  2  2 96
 0 4 0 523076  40788 1845  0  0  0  0  0   0  82  334 12457 626  2  2 96
 0 4 0 523100  40776 1851  0  0  0  0  0   0  91  345 10914 623  2  3 95
 0 4 0 523120  40764 1845  0  0  0  0  0   0  92  343 19213 596  1  5 95
 0 4 0 523136  40752 1845  0  0  0  0  0   0  97  349  8659 605  2  2 96
 0 4 0 523144  40748 4501  0  0  0  0  0  32  78  385 15632 934 25 12 64
 0 4 0 523168  40728 1853  0  0  0  0  0   3  74  335  3965 531  0  2 98

$ ps -Upostgresql -Ostart |
grep -v idle
  PID STARTED  TT  STAT     TIME COMMAND
 8267 10:53AM  ??  Is    0:00.28 /usr/local/bin/pg_autovacuum -D -L /var/pgsql/autovacuum
23271 10:54AM  ??  I     1:13.56 postmaster: dps dps 127.0.0.1 SELECT (postgres)
28662 10:55AM  ??  I     0:59.98 postmaster: dps dps 127.0.0.1 SELECT (postgres)
25794 10:56AM  ??  D     1:30.48 postmaster: dps dps 127.0.0.1 SELECT (postgres)
 2143 11:02AM  ??  D     1:02.06 postmaster: dps dps 127.0.0.1 DELETE (postgres)
25904 10:52AM  C0- I     0:00.07 /usr/local/bin/postmaster -D /var/pgsql (postgres)
10908 10:52AM  C0- I     0:05.96 postmaster: stats collector process (postgres)
 7045 10:52AM  C0- I     0:05.19 postmaster: stats buffer process (postgres)

$ grep -v -E '^#' /var/pgsql/postgresql.conf
tcpip_socket = true
max_connections = 100
shared_buffers = 1024        # 32KB
max_fsm_pages = 100          # min max_fsm_relations*16, 6 bytes each
max_fsm_relations = 200      # min 100, ~50 bytes each
wal_buffers = 32             # min 4, 8KB each
checkpoint_segments = 16     # in logfile segments, min 1, 16MB each
commit_delay = 100           # range 0-100000, in microseconds
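For what it's worth, with BLCKSZ = 32768 the shared_buffers = 1024 above amounts to only 32 MB of the 1 GB of RAM. A hedged sketch of settings one might experiment with (values are illustrative assumptions, not a recommendation, and require a postmaster restart):

```
# Each shared buffer is one BLCKSZ (32 KB on this build)
shared_buffers = 2048           # 2048 * 32 KB = 64 MB
effective_cache_size = 16384    # tell the planner ~512 MB of OS cache
```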
Re: [PERFORM] 225 times slower
> The planner is guessing that scanning in rec_id order will produce a
> matching row fairly quickly (sooner than selecting all the matching rows
> and sorting them would do). It's wrong in this case, but I'm not sure it
> could do better without very detailed cross-column statistics. Am I right
> to guess that the rows that match the WHERE clause are not evenly
> distributed in the rec_id order, but rather there are no such rows till
> you get well up in the ordering?

I must agree that the data are not evenly distributed.

For table url: count 271,395; min rec_id 1; max rec_id 3,386,962.

dps=# select * from url where crc32=419903683;
count 852; min rec_id 264,374; max rec_id 2,392,046.

I ran

dps=# select ctid, rec_id from url where crc32=419903683 order by crc32, rec_id;

and then extracted the page_id from ctid in a text editor: there are 409 distinct pages for the 852 rows. There are 4592 pages in the url table.

dps=# select (rec_id/25), count(*) from url where crc32=419903683
      group by rec_id/25 having count(*) > 4 order by count(*) desc;
 ?column? | count
----------+-------
    30289 |    25
    11875 |    24
    11874 |    24
    11876 |    24
    28154 |    23
    26164 |    21
    26163 |    21
    55736 |    21
    40410 |    20
    47459 |    20
    30290 |    20
    28152 |    20
    26162 |    19
    30291 |    19
    37226 |    19
    60357 |    18
    28150 |    18
    12723 |    17
    40413 |    17
    40412 |    16
    33167 |    15
    40415 |    15
    12961 |    15
    40414 |    15
    28151 |    14
    63961 |    14
    26165 |    13
    11873 |    13
    63960 |    12
    37225 |    12
    37224 |    12
    20088 |    11
    30288 |    11
    91450 |    11
    20087 |    11
    26892 |    10
    47458 |    10
    40411 |    10
    91451 |    10
    12722 |    10
    28153 |     9
    43488 |     9
    60358 |     7
    60356 |     7
    11877 |     7
    33168 |     6
    91448 |     6
    26161 |     6
    40409 |     5
    28155 |     5
    28318 |     5
    30292 |     5
    26891 |     5
    95666 |     5
(54 rows)

Another question: with VACUUM VERBOSE ANALYZE I see:

INFO: url: removed 568107 row versions in 4592 pages
DETAIL: CPU 0.51s/1.17u sec elapsed 174.74 sec.

And I run pg_autovacuum. Does the big number (568107) of removed rows indicate I should set a higher max_fsm_pages?
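The manual page-id extraction from ctid can also be done directly in SQL. A hedged sketch (the text-to-point cast trick works because a ctid renders as '(page,item)', which is valid point syntax; behaviour of this cast path on 7.4 is an assumption):

```sql
-- Count the distinct heap pages holding matching rows, without a text editor.
SELECT count(DISTINCT (ctid::text::point)[0]) AS distinct_pages
FROM url
WHERE crc32 = 419903683;
```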
$ grep fsm /var/pgsql/postgresql.conf
max_fsm_pages = 6            # min max_fsm_relations*16, 6 bytes each
max_fsm_relations = 200      # min 100, ~50 bytes each

dps=# VACUUM VERBOSE ANALYSE url;
INFO: vacuuming public.url
INFO: index url_crc now contains 211851 row versions in 218 pages
DETAIL: 129292 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/1.38u sec elapsed 5.71 sec.
INFO: index url_seed now contains 272286 row versions in 644 pages
DETAIL: 568107 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.10s/2.96u sec elapsed 13.06 sec.
INFO: index url_referrer now contains 272292 row versions in 603 pages
DETAIL: 568107 index row versions were removed.
4 index pages have been deleted, 0 are currently reusable.
CPU 0.10s/2.98u sec elapsed 22.30 sec.
INFO: index url_next_index_time now contains 272292 row versions in 684 pages
DETAIL: 568107 index row versions were removed.
42 index pages have been deleted, 0 are currently reusable.
CPU 0.07s/1.80u sec elapsed 9.50 sec.
INFO: index url_status now contains 272298 row versions in 638 pages
DETAIL: 568107 index row versions were removed.
12 index pages have been deleted, 0 are currently reusable.
CPU 0.03s/2.18u sec elapsed 13.66 sec.
INFO: index url_bad_since_time now contains 272317 row versions in 611 pages
DETAIL: 568107 index row versions were removed.
4 index pages have been deleted, 0 are currently reusable.
CPU 0.07s/2.40u sec elapsed 10.99 sec.
INFO: index url_hops now contains 272317 row versions in 637 pages
DETAIL: 568107 index row versions were removed.
5 index pages have been deleted, 0 are currently reusable.
CPU 0.04s/2.24u sec elapsed 12.46 sec.
INFO: index url_siteid now contains 272321 row versions in 653 pages
DETAIL: 568107 index row versions were removed.
13 index pages have been deleted, 0 are currently reusable.
CPU 0.14s/2.05u sec elapsed 11.63 sec.
INFO: index url_serverid now contains 272321 row versions in 654 pages
DETAIL: 568107 index row versions were removed.
8 index pages have been deleted, 0 are currently reusable.
CPU 0.10s/2.27u sec elapsed 11.45 sec.
INFO: index url_url now contains 272065 row versions in 1892 pages
DETAIL: 193884 index row versions were removed.
5 index pages have been deleted, 0 are currently reusable.
CPU 0.39s/1.50u sec elapsed 36.99 sec.
INFO: index url_last_mod_time now contains 272071 row versions in 317 pages
DETAIL: 193884 index row versions were removed.
7 index pages have been deleted, 0 are currently reusable.
CPU 0.03s/1.38u sec
Re: [PERFORM] 225 times slower
Hi,

I apologize for the mistake. So, I dumped the database, reloaded it, then ran VACUUM ANALYZE. For each statement I then quit postgres, started it, executed the one command, then quit.

On 14 Apr 2004, at 14:39, Pailloncy Jean-Gérard wrote:

dps=# explain analyze SELECT rec_id FROM url WHERE crc32!=0 AND crc32=764518963 AND status IN (200,304,206) ORDER BY rec_id LIMIT 1;
QUERY PLAN
 Limit (cost=169.79..169.79 rows=1 width=4) (actual time=502.397..502.398 rows=1 loops=1)
   -> Sort (cost=169.79..169.86 rows=30 width=4) (actual time=502.393..502.393 rows=1 loops=1)
         Sort Key: rec_id
         -> Index Scan using url_crc on url (cost=0.00..169.05 rows=30 width=4) (actual time=43.545..490.895 rows=56 loops=1)
               Index Cond: (crc32 = 764518963)
               Filter: ((crc32 <> 0) AND ((status = 200) OR (status = 304) OR (status = 206)))
 Total runtime: 502.520 ms
(7 rows)
dps=# \q

dps=# explain analyze SELECT rec_id FROM url WHERE crc32!=0 AND crc32=764518963 AND status IN (200,304,206) ORDER BY crc32,rec_id LIMIT 1;
QUERY PLAN
 Limit (cost=169.79..169.79 rows=1 width=8) (actual time=5.893..5.894 rows=1 loops=1)
   -> Sort (cost=169.79..169.86 rows=30 width=8) (actual time=5.889..5.889 rows=1 loops=1)
         Sort Key: crc32, rec_id
         -> Index Scan using url_crc on url (cost=0.00..169.05 rows=30 width=8) (actual time=0.445..5.430 rows=56 loops=1)
               Index Cond: (crc32 = 764518963)
               Filter: ((crc32 <> 0) AND ((status = 200) OR (status = 304) OR (status = 206)))
 Total runtime: 6.020 ms
(7 rows)
dps=# \q

dps=# explain analyze SELECT rec_id FROM url WHERE crc32!=0 AND crc32=419903683 AND status IN (200,304,206) ORDER BY rec_id LIMIT 1;
QUERY PLAN
 Limit (cost=0.00..27.95 rows=1 width=4) (actual time=11021.875..11021.876 rows=1 loops=1)
   -> Index Scan using url_pkey on url (cost=0.00..11625.49 rows=416 width=4) (actual time=11021.868..11021.868 rows=1 loops=1)
         Filter: ((crc32 <> 0) AND (crc32 = 419903683) AND ((status = 200) OR (status = 304) OR (status = 206)))
 Total runtime: 11021.986 ms
(4 rows)
dps=# \q

dps=# explain analyze SELECT rec_id
FROM url WHERE crc32!=0 AND crc32=419903683 AND status IN (200,304,206) ORDER BY crc32,rec_id LIMIT 1;
QUERY PLAN
 Limit (cost=2000.41..2000.41 rows=1 width=8) (actual time=48.503..48.504 rows=1 loops=1)
   -> Sort (cost=2000.41..2001.45 rows=416 width=8) (actual time=48.499..48.499 rows=1 loops=1)
         Sort Key: crc32, rec_id
         -> Index Scan using url_crc on url (cost=0.00..1982.31 rows=416 width=8) (actual time=4.848..45.452 rows=796 loops=1)
               Index Cond: (crc32 = 419903683)
               Filter: ((crc32 <> 0) AND ((status = 200) OR (status = 304) OR (status = 206)))
 Total runtime: 48.656 ms
(7 rows)
dps=# \q

So, with all data fresh and everything rebuilt from scratch, on a backend that runs one and only one query, the result is strange. Why does adding an ORDER BY clause on a column with a single value speed things up from 502 ms to 6 ms? And why, when crc32=419903683, which is one of the most frequent values in the table, does the query planner choose such a bad plan (225 times slower)?

Regards,
Jean-Gérard Pailloncy
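The pattern behind the speed-up is worth stating plainly: with ORDER BY rec_id alone, the planner may walk url_pkey in rec_id order hoping to hit a matching row early, which is disastrous when the matches cluster late in that ordering. Prepending the equality column to the sort key steers it to the selective index instead. A hedged sketch of the idiom (assumes the url_crc index on crc32, as in the plans above):

```sql
SELECT rec_id
FROM url
WHERE crc32 = 419903683
  AND status IN (200, 304, 206)
ORDER BY crc32, rec_id   -- leading crc32 is constant, so the order is
LIMIT 1;                 -- still by rec_id, but the plan uses url_crc
```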
Re: [PERFORM] Index Backward Scan fast / Index Scan slow !
> Hm, this is odd. That says you've got 349519 live index entries in only
> 463 actively-used index pages, or an average of 754 per page, which
> AFAICS could not fit in an 8K page. Are you using a nondefault value of
> BLCKSZ? If so what?

Sorry, I forgot to specify that I use a BLCKSZ of 32768, the same block size for newfs and the same for the RAID slice size. I tested the drive some time ago and found a speed win when the slice size, the disk block size and the read block size were all the same. I do not think a different BLCKSZ should exhibit a slowdown like the one I found.

> If you *are* using default BLCKSZ then this index must be corrupt, and
> what you probably need to do is REINDEX it. But before you do that,
> could you send me a copy of the index file?

Do you want the index file now, or may I try something first?

Regards,
Jean-Gérard Pailloncy
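As an aside, the compiled-in block size can be confirmed rather than guessed. A hedged sketch (the block_size parameter is readable via SHOW on later releases; on 7.4-era servers one would instead look for "Database block size" in pg_controldata output):

```sql
SHOW block_size;   -- reports BLCKSZ in bytes
```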
[PERFORM] Index Backward Scan fast / Index Scan slow !
Hi,

I have tested the following query many times:

dps=# explain analyze select next_index_time from url order by next_index_time desc limit 1;
QUERY PLAN
 Limit (cost=0.00..2.62 rows=1 width=4) (actual time=56.615..56.616 rows=1 loops=1)
   -> Index Scan Backward using url_next_index_time on url (cost=0.00..768529.55 rows=293588 width=4) (actual time=56.610..56.610 rows=1 loops=1)
 Total runtime: 56.669 ms
(3 rows)

dps=# explain analyze select next_index_time from url order by next_index_time asc limit 1;
QUERY PLAN
 Limit (cost=0.00..2.62 rows=1 width=4) (actual time=94879.636..94879.637 rows=1 loops=1)
   -> Index Scan using url_next_index_time on url (cost=0.00..768529.55 rows=293588 width=4) (actual time=94879.631..94879.631 rows=1 loops=1)
 Total runtime: 94879.688 ms
(3 rows)

How can I optimize the last query (~2000 times slower than the first one)? I suppose there is some odd distribution of data in the index? Is the solution to reindex the data?

Regards,
Jean-Gérard Pailloncy
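The usual cause of this asymmetry is a pile of dead entries at the low end of the index (old, already-deleted next_index_time values) that the forward scan must step over before reaching the first live tuple, while the backward scan finds a live tuple immediately. A hedged sketch of the standard remedies, in order:

```sql
-- First: let VACUUM reclaim the dead index entries (sufficient on 7.4+
-- in most cases; run ANALYZE too so statistics stay current).
VACUUM ANALYZE url;

-- Only if the index stays bloated afterwards: rebuild it outright.
-- (Takes a lock; on old releases it blocks access to the table.)
REINDEX INDEX url_next_index_time;
```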
Re: [PERFORM] Index Backward Scan fast / Index Scan slow !
Hi,

> > How can I optimize the last query (~2000 times slower than the first
> > one)? I suppose there is some odd distribution of data in the index?
> Looks to me like a whole lot of dead rows at the left end of the index.
> Have you VACUUMed this table lately?

From pg_autovacuum:

[2004-04-10 05:45:39 AM] Performing: ANALYZE public.url
[2004-04-10 11:13:25 AM] Performing: ANALYZE public.url
[2004-04-10 03:12:14 PM] Performing: VACUUM ANALYZE public.url
[2004-04-11 04:58:29 AM] Performing: ANALYZE public.url
[2004-04-11 03:48:25 PM] Performing: ANALYZE public.url
[2004-04-11 09:21:31 PM] Performing: ANALYZE public.url
[2004-04-12 03:24:06 AM] Performing: ANALYZE public.url
[2004-04-12 07:20:08 AM] Performing: VACUUM ANALYZE public.url

> It would be interesting to see what VACUUM VERBOSE has to say about it.

dps=# VACUUM VERBOSE url;
INFO: vacuuming public.url
INFO: index url_pkey now contains 348972 row versions in 2344 pages
DETAIL: 229515 index row versions were removed.
41 index pages have been deleted, 41 are currently reusable.
CPU 0.32s/1.40u sec elapsed 70.66 sec.
INFO: index url_crc now contains 215141 row versions in 497 pages
DETAIL: 108343 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.06s/0.96u sec elapsed 9.13 sec.
INFO: index url_seed now contains 348458 row versions in 2987 pages
DETAIL: 229515 index row versions were removed.
345 index pages have been deleted, 345 are currently reusable.
CPU 0.40s/2.38u sec elapsed 74.26 sec.
INFO: index url_referrer now contains 349509 row versions in 1964 pages
DETAIL: 229515 index row versions were removed.
65 index pages have been deleted, 65 are currently reusable.
CPU 0.34s/1.53u sec elapsed 127.37 sec.
INFO: index url_next_index_time now contains 349519 row versions in 3534 pages
DETAIL: 229515 index row versions were removed.
3071 index pages have been deleted, 2864 are currently reusable.
CPU 0.32s/0.67u sec elapsed 76.25 sec.
INFO: index url_status now contains 349520 row versions in 3465 pages
DETAIL: 229515 index row versions were removed.
2383 index pages have been deleted, 2256 are currently reusable.
CPU 0.35s/0.85u sec elapsed 89.25 sec.
INFO: index url_bad_since_time now contains 349521 row versions in 2017 pages
DETAIL: 229515 index row versions were removed.
38 index pages have been deleted, 38 are currently reusable.
CPU 0.54s/1.46u sec elapsed 83.77 sec.
INFO: index url_hops now contains 349620 row versions in 3558 pages
DETAIL: 229515 index row versions were removed.
1366 index pages have been deleted, 1356 are currently reusable.
CPU 0.43s/0.91u sec elapsed 132.14 sec.
INFO: index url_siteid now contains 350551 row versions in 3409 pages
DETAIL: 229515 index row versions were removed.
2310 index pages have been deleted, 2185 are currently reusable.
CPU 0.35s/1.01u sec elapsed 85.08 sec.
INFO: index url_serverid now contains 350552 row versions in 3469 pages
DETAIL: 229515 index row versions were removed.
1014 index pages have been deleted, 1009 are currently reusable.
CPU 0.54s/1.01u sec elapsed 120.40 sec.
INFO: index url_url now contains 346563 row versions in 6494 pages
DETAIL: 213608 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 1.35s/2.07u sec elapsed 285.05 sec.
INFO: index url_last_mod_time now contains 346734 row versions in 1106 pages
DETAIL: 213608 index row versions were removed.
27 index pages have been deleted, 17 are currently reusable.
CPU 0.17s/0.95u sec elapsed 17.92 sec.
INFO: url: removed 229515 row versions in 4844 pages
DETAIL: CPU 0.53s/1.26u sec elapsed 375.64 sec.
INFO: url: found 229515 removable, 310913 nonremovable row versions in 26488 pages
DETAIL: 29063 dead row versions cannot be removed yet.
There were 3907007 unused item pointers.
192 pages are entirely empty.
CPU 7.78s/17.09u sec elapsed 3672.29 sec.
INFO: vacuuming pg_toast.pg_toast_127397204
INFO: index pg_toast_127397204_index now contains 0 row versions in 1 pages
DETAIL: 0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.06 sec.
INFO: pg_toast_127397204: found 0 removable, 0 nonremovable row versions in 0 pages
DETAIL: 0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.07 sec.
VACUUM

> > Is the solution to reindex the data?
> In 7.4 a VACUUM should be sufficient ... or at least, if it isn't I'd
> like to know why not before you destroy the evidence by reindexing.

Yes, of course.

Regards,
Jean-Gérard Pailloncy
Re: [PERFORM] Index Backward Scan fast / Index Scan slow ! (Modified by Pailloncy Jean-Gérard)
Hi,

> In 7.4 a VACUUM should be sufficient ... or at least, if it isn't

After VACUUM:

dps=# explain analyze select next_index_time from url order by next_index_time desc limit 1;
QUERY PLAN
 Limit (cost=0.00..2.62 rows=1 width=4) (actual time=0.098..0.099 rows=1 loops=1)
   -> Index Scan Backward using url_next_index_time on url (cost=0.00..814591.03 rows=310913 width=4) (actual time=0.096..0.096 rows=1 loops=1)
 Total runtime: 0.195 ms
(3 rows)

dps=# explain analyze select next_index_time from url order by next_index_time asc limit 1;
QUERY PLAN
 Limit (cost=0.00..2.62 rows=1 width=4) (actual time=13504.105..13504.106 rows=1 loops=1)
   -> Index Scan using url_next_index_time on url (cost=0.00..814591.03 rows=310913 width=4) (actual time=13504.099..13504.099 rows=1 loops=1)
 Total runtime: 13504.158 ms
(3 rows)

Better, but...

Regards,
Jean-Gérard Pailloncy
[PERFORM] slow plan for min/max
I have: psql (PostgreSQL) 7.3.2

I made a modification to 'access/index/indexam.c' where I commented out:

#ifdef NOT_USED
    if (scan->keys_are_unique && scan->got_tuple)
    {
        if (ScanDirectionIsForward(direction))
        {
            if (scan->unique_tuple_pos <= 0)
                scan->unique_tuple_pos++;
        }
        else if (ScanDirectionIsBackward(direction))
        {
            if (scan->unique_tuple_pos >= 0)
                scan->unique_tuple_pos--;
        }
        if (scan->unique_tuple_pos == 0)
            return heapTuple;
        else
            return NULL;
    }
#endif

I do not remember the reference of the bug, but the fix was planned for 7.4.

I do:

psql=# \di
[skip]
public | url_next_index_time | index | postgresql | url
[skip]
(11 rows)

I have an index on the next_index_time field of table url.

psql=# explain select min(next_index_time) from url \g
QUERY PLAN
 Aggregate (cost=85157.70..85157.70 rows=1 width=4)
   -> Seq Scan on url (cost=0.00..80975.56 rows=1672856 width=4)
(2 rows)

A silly seq scan of the whole table.

psql=# explain SELECT next_index_time FROM url ORDER BY next_index_time LIMIT 1 \g
QUERY PLAN
 Limit (cost=0.00..0.20 rows=1 width=4)
   -> Index Scan using url_next_index_time on url (cost=0.00..340431.47 rows=1672856 width=4)
(2 rows)

I ask for the same thing. That's better! Why does the planner do that?

Jean-Gérard Pailloncy
Paris, France
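The reason for the seq scan: in releases before 8.1, min() and max() were opaque to the planner like any user aggregate, so it had no way to know an index could answer them. The standard workaround, shown implicitly above, is to spell the aggregate as an index-friendly ORDER BY ... LIMIT 1 (sketch; 8.1 and later do this rewrite automatically):

```sql
-- equivalent of min(next_index_time)
SELECT next_index_time FROM url
ORDER BY next_index_time ASC LIMIT 1;

-- equivalent of max(next_index_time)
SELECT next_index_time FROM url
ORDER BY next_index_time DESC LIMIT 1;
```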