Hello Perf,

Lately I've been pondering. As systems get more complex, it's not uncommon for tiered storage to enter the picture. Say, for instance, a user has some really fast tables on an NVRAM-based device, slower-access data on a RAID, even slower data on an EDB, and variants like local disk or a RAM drive.
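
To make that concrete, here's roughly the kind of layout I mean (names and mount points invented purely for illustration):

    -- Hypothetical mount points for each tier
    CREATE TABLESPACE nvram_fast LOCATION '/mnt/nvram/pgdata';
    CREATE TABLESPACE raid_bulk  LOCATION '/mnt/raid/pgdata';

    -- Hot, latency-sensitive data on the fast device
    CREATE TABLE quote_ticks (
        quote_id   bigint,
        quoted_at  timestamptz,
        price      numeric
    ) TABLESPACE nvram_fast;

    -- Cold history relegated to the slower array
    CREATE TABLE quote_history (LIKE quote_ticks) TABLESPACE raid_bulk;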

Yet there's only one global setting each for random_page_cost, seq_page_cost, and so on.

Would there be any benefit at all to adding these as parameters on the tablespaces themselves? I can imagine the planner overriding the global default with the tablespace setting, per table, when calculating the cost of retrieving rows from tables and indexes on faster or slower storage.
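
Inventing syntax purely for the sake of argument (I'm not claiming these storage parameters exist today), it might look something like:

    -- Hypothetical: tell the planner random I/O on nvram_fast is nearly free
    ALTER TABLESPACE nvram_fast SET (random_page_cost = 1.1, seq_page_cost = 1.0);

    -- ...and that the bulk tier heavily favors sequential access
    ALTER TABLESPACE raid_bulk SET (random_page_cost = 10.0, seq_page_cost = 1.5);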

This is especially true since each of the storage tiers I listed has a drastically different performance profile, but there's no way to hint that to the planner. There was a talk at the last PG Open where the speaker's EDB tests vastly preferred partitioning and sequential access because random access was so terrible; NVRAM has the opposite profile. Currently, tuning for one necessarily works against the other.
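
To put rough numbers on it, here's all we get today: one cluster-wide pair of values (these are the stock defaults, and the annotations are illustrative, not measurements):

    # postgresql.conf -- a single pair of values for every tablespace
    seq_page_cost = 1.0
    random_page_cost = 4.0    # far too pessimistic for NVRAM, and arguably
                              # too optimistic for the EDB tier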

I didn't see anything in the Todo Wiki, so I figured I'd ask. :)

Thanks!

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-444-8534
stho...@optionshouse.com


