Re: [HACKERS] Bitmap scan cost model (was Re: bitmap scans, btree scans, and tid order)
> Incrementing random_page_cost from 4 (the default) to 5 causes the
> planner to make a better decision.

> We have such a low default random_page_cost primarily to mask other
> problems in the optimizer, two of which are
> . multi-column index correlation
> . interpolation between min_IO_cost and max_IO_cost, which approaches
>   max_IO_cost too fast

One other important factor here is concurrency. If you have a lot of backends concurrently doing other IO, your sequential IO numbers will suffer more than your random IO numbers. It might be that some super-smart OS readahead implementation alleviates that problem, but I have not yet encountered one. Also, doing less random IO tends to yield higher cache hit rates.

Thus I think that if you are alone, 4 tends to be too low, while under concurrent load 4 tends to be too high. (All assuming fairly large tables that don't fit into RAM.)

Andreas

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly
Re: [HACKERS] Bitmap scan cost model (was Re: bitmap scans, btree scans, and tid order)
> But to get the estimated cost ratio to match up with the actual cost
> ratio, we'd have to raise random_page_cost to nearly 70, which is a bit
> hard to credit. What was the platform being tested here?

Why? Numbers for modern single disks are 1-2 MB/s for 8k random IO and 50-120 MB/s sequential.

Andreas
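A back-of-the-envelope check of this point, using only the throughput figures quoted in the message: per 8k page, those figures put the sequential-to-random throughput ratio in a range that comfortably includes 70.

```python
# Per-page throughput ratio implied by the disk figures in the message:
# 1-2 MB/s for 8k random reads vs 50-120 MB/s sequential.

PAGE = 8192  # bytes per PostgreSQL page

def pages_per_second(mb_per_s):
    return mb_per_s * 1024 * 1024 / PAGE

ratio_low = pages_per_second(50) / pages_per_second(2)    # 25.0
ratio_high = pages_per_second(120) / pages_per_second(1)  # 120.0
print(ratio_low, ratio_high)  # a random_page_cost near 70 is in range
```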
Re: [HACKERS] Bitmap scan cost model (was Re: bitmap scans, btree scans, and tid order)
On Tue, 17 May 2005 22:12:17 -0700, Jeffrey W. Baker [EMAIL PROTECTED] wrote:
> Incrementing random_page_cost from 4 (the default) to 5 causes the
> planner to make a better decision.

We have such a low default random_page_cost primarily to mask other problems in the optimizer, two of which are:
. multi-column index correlation
. interpolation between min_IO_cost and max_IO_cost, which approaches max_IO_cost too fast

Servus
Manfred
Re: [HACKERS] Bitmap scan cost model (was Re: bitmap scans, btree scans, and tid order)
Jeffrey W. Baker [EMAIL PROTECTED] writes:
> ... If bitmap scan is disabled, the planner will pick index scan even
> in cases when sequential scan is 10x faster:
>
> scratch=# set enable_bitmapscan to off;
> SET
> scratch=# explain analyze select count(1) from test where random >= 1429076987 and random < 1429076987 + 1000;
>
>                              QUERY PLAN
> ------------------------------------------------------------------
>  Aggregate  (cost=170142.03..170142.03 rows=1 width=0) (actual time=177419.182..177419.185 rows=1 loops=1)
>    ->  Index Scan using test_rand_idx on test  (cost=0.00..170034.11 rows=43167 width=0) (actual time=0.035..177255.696 rows=46764 loops=1)
>          Index Cond: ((random >= 1429076987) AND (random < 1439076987))
>  Total runtime: 177419.302 ms
> (4 rows)
>
> scratch=# set enable_indexscan to off;
> SET
> scratch=# explain analyze select count(1) from test where random >= 1429076987 and random < 1429076987 + 1000;
>
>                              QUERY PLAN
> ------------------------------------------------------------------
>  Aggregate  (cost=204165.55..204165.55 rows=1 width=0) (actual time=12334.042..12334.045 rows=1 loops=1)
>    ->  Seq Scan on test  (cost=0.00..204057.62 rows=43167 width=0) (actual time=17.436..12174.150 rows=46764 loops=1)
>          Filter: ((random >= 1429076987) AND (random < 1439076987))
>  Total runtime: 12334.156 ms
> (4 rows)
>
> Obviously in this case sequential scan was (would have been) a huge
> win. Incrementing random_page_cost from 4 (the default) to 5 causes
> the planner to make a better decision.

But to get the estimated cost ratio to match up with the actual cost ratio, we'd have to raise random_page_cost to nearly 70, which is a bit hard to credit. What was the platform being tested here?

			regards, tom lane
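The mismatch Tom describes can be checked directly from the EXPLAIN ANALYZE numbers quoted above: the planner priced the index scan cheaper, but the measured runtimes show the sequential scan was roughly 14x faster, so the estimated and actual cost ratios point in opposite directions.

```python
# Estimated vs actual cost ratio, using the top-level cost and runtime
# figures from the two EXPLAIN ANALYZE outputs quoted in the message.

index_cost, index_ms = 170142.03, 177419.302  # index scan plan
seq_cost, seq_ms = 204165.55, 12334.156       # seq scan plan

estimated_ratio = index_cost / seq_cost  # < 1: planner prefers index scan
actual_ratio = index_ms / seq_ms         # ~14.4: index scan far slower

print(round(estimated_ratio, 2), round(actual_ratio, 1))
```

The estimate says the index scan should cost about 0.83x the seq scan; reality was about 14x the other way, which is why closing the gap by tuning random_page_cost alone requires an implausibly large value.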