Robert Haas <robertmh...@gmail.com> writes:
> This issue of detoasting costs comes up a lot, specifically in
> reference to @@.  I wonder if we shouldn't try to apply some quick and
> dirty hack in time for 9.2, like maybe charging random_page_cost for every
> row or attribute we think will require detoasting.  That's obviously
> going to be an underestimate in many if not most cases, but it would
> probably still be an improvement over assuming that detoasting is
> free.
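
(Purely as illustration, the kind of flat surcharge being suggested might
look something like the sketch below; the function and its parameters are
hypothetical, not actual planner code.)

    /* Hypothetical sketch -- not actual PostgreSQL planner code. */
    typedef double Cost;

    static Cost
    quick_detoast_surcharge(double ntuples, int natts_maybe_toasted,
                            Cost random_page_cost)
    {
        /*
         * Charge one random_page_cost per possibly-toasted attribute per
         * row.  Usually an underestimate, but better than costing
         * detoasting at zero.
         */
        return ntuples * (double) natts_maybe_toasted * random_page_cost;
    }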

Well, you can't theorize without data, to misquote Sherlock.  We'd need
to have some stats on which to base "we think this will require
detoasting".  I guess we could teach ANALYZE to compute and store
fractions "percent of entries in this column that are compressed"
and "percent that are stored out-of-line", and then hope that those
percentages apply to the subset of entries that a given query will
visit, and thereby derive a number of operations to multiply by whatever
we think the cost-per-detoast-operation is.
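
(Again purely as illustration, that arithmetic might look something like
the sketch below; the stats fields and per-operation cost parameters are
assumptions, nothing like them exists today.)

    /*
     * Hypothetical sketch -- these stats fields and cost constants do not
     * exist; they only illustrate the estimate described above.
     */
    typedef double Cost;

    typedef struct ToastStats
    {
        double frac_compressed;  /* fraction of entries stored compressed */
        double frac_external;    /* fraction stored out-of-line */
    } ToastStats;

    static Cost
    estimated_detoast_cost(double rows_visited, const ToastStats *stats,
                           Cost decompress_cost, Cost external_fetch_cost)
    {
        /*
         * Hope the column-wide fractions hold for the rows this query will
         * actually visit, then multiply out the expected number of
         * decompressions and out-of-line fetches by a per-operation cost.
         */
        double ndecompress = rows_visited * stats->frac_compressed;
        double nfetch = rows_visited * stats->frac_external;

        return ndecompress * decompress_cost + nfetch * external_fetch_cost;
    }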

It's probably all do-able, but it seems way too late to be thinking
about this for 9.2.  We've already got a ton of new stuff that needs
to be polished and tuned...

                        regards, tom lane
