Gregory Stark wrote:
> But with your numbers things look even weirder. With a 90MB/s sequential speed (91us) and 9ms seek latency that would be a random_page_cost of nearly 100!
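For reference, that back-of-the-envelope calculation can be reproduced like this (a minimal sketch; the 8 kB page size is PostgreSQL's default block size, and the drive figures are the ones quoted above):

```python
# Estimate random_page_cost from raw drive characteristics:
# the ratio of one random page fetch (seek) to one sequential page read.
PAGE_SIZE = 8 * 1024        # PostgreSQL default block size, bytes
seq_throughput = 90e6       # 90 MB/s sequential read speed
seek_latency = 9e-3         # 9 ms average seek time

sequential_page_time = PAGE_SIZE / seq_throughput   # ~91 microseconds
random_page_cost = seek_latency / sequential_page_time

print(f"sequential page read: {sequential_page_time * 1e6:.0f} us")
print(f"random_page_cost: {random_page_cost:.0f}")   # -> ~99, i.e. nearly 100
```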
Looks good :). If you actually want to base something on real-world numbers, I'd suggest collecting them beforehand from existing setups. I was introduced to IOmeter [1] at an HP performance course; it's a nice GUI tool that lets you define workloads to your liking and run them against given block devices. Unfortunately it's Windows-only. fio [2] and IOzone [3] should do the same for the Unix world, minus the "nice" and "GUI" parts ;).
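For anyone who wants to collect such numbers with fio, a job file along these lines measures random 8 kB reads, roughly matching a PostgreSQL page fetch (the target path is a placeholder; point it at a scratch file or an expendable test device, and adjust size/runtime to taste):

```ini
; random-read.fio -- random 8 kB direct reads, ~one PostgreSQL page per I/O
; run with: fio random-read.fio
[global]
ioengine=sync
direct=1
runtime=30
time_based

[randread-8k]
rw=randread
bs=8k
size=1g
; placeholder path -- replace with a scratch file or test device
filename=/tmp/fio-testfile
```

Comparing the reported random-read latency against a matching `rw=read` job gives the seek-vs-sequential ratio discussed above.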
For improving the model - in what situations would we benefit from a more accurate model here?
Is it correct that this is only relevant for large (if not huge) tables that border on (or don't fit into) effective_cache_size (and, respectively, the OS page cache)?
And we need the cost to decide between a sequential scan, an index scan (ORDER BY, small expected result set), and a bitmap index scan?
Speaking of bitmap index/heap scans - are those costed with seq_page_cost or random_page_cost?
regards, michael

[1] http://www.iometer.org/
[2] http://freshmeat.net/projects/fio/
[3] http://www.iozone.org/