On Wed, Feb 8, 2012 at 6:47 PM, Peter van Hardenberg <p...@pvh.ca> wrote:
> On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe <scott.marl...@gmail.com> wrote:
>> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg <p...@pvh.ca> wrote:
>>> That said, I have access to a very large fleet from which I can
>>> collect data, so I'm all ears for suggestions about how to measure,
>>> and I would gladly share the results with the list.
>>
>> I wonder if some kind of script that grabbed random queries and ran
>> them with EXPLAIN ANALYZE at various random_page_cost settings, to see
>> when the plans switched and which plans were faster, would work?
>
> We aren't exactly in a position where we can adjust random_page_cost
> on our users' databases arbitrarily to see what breaks. That would
> be... irresponsible of us.
>

Oh, of course we could do this at the session level, but executing
potentially expensive queries would still be unneighborly.
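
For concreteness, here is a rough sketch of what such a probe could look
like at the session level (Python with psycopg2; the DSN, the query, and
the candidate settings are placeholders I made up). It uses plain
EXPLAIN, so nothing expensive actually runs; that only shows where the
planner's choice flips, not which plan is really faster:

# Sketch: detect plan switches across random_page_cost values using a
# session-level SET and plain EXPLAIN (the query is never executed).
import json
import psycopg2

def plans_by_rpc(dsn, query, settings=(1.0, 2.0, 4.0)):
    """Return {random_page_cost: (top node type, estimated total cost)}."""
    plans = {}
    conn = psycopg2.connect(dsn)
    try:
        cur = conn.cursor()
        for rpc in settings:
            # SET affects this session only; other connections are untouched.
            cur.execute("SET random_page_cost = %s", (rpc,))
            cur.execute("EXPLAIN (FORMAT JSON) " + query)
            row = cur.fetchone()[0]
            # Depending on the psycopg2 version, the JSON may arrive already
            # parsed or as a plain string.
            doc = row if isinstance(row, list) else json.loads(row)
            plan = doc[0]["Plan"]
            plans[rpc] = (plan["Node Type"], plan["Total Cost"])
    finally:
        conn.close()
    return plans

if __name__ == "__main__":
    # Placeholder DSN and query, for illustration only.
    result = plans_by_rpc("dbname=test", "SELECT * FROM t WHERE x < 100")
    for rpc in sorted(result):
        print("random_page_cost=%s: %s, est. cost %s" % ((rpc,) + result[rpc]))

Comparing actual runtimes of the competing plans would still require
EXPLAIN ANALYZE, which brings back the expense problem.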

Perhaps another way to think of this problem would be that we want to
find queries where the cost estimate is inaccurate.
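
One rough way to do that (again Python with psycopg2; the sample of
queries and the ten-times-off-the-median threshold are assumptions on my
part) would be to run EXPLAIN (ANALYZE, FORMAT JSON) over a sample of
cheap, read-only statements and flag the ones whose ratio of actual time
to estimated cost sits far from the fleet-wide median:

# Sketch: flag queries whose estimated cost and measured runtime disagree.
# EXPLAIN ANALYZE does execute each statement, so only feed it SELECTs
# that are cheap or that the application is running anyway.
import json
import psycopg2

def cost_time_ratios(dsn, queries):
    """Return {query: actual total time (ms) / estimated total cost}."""
    ratios = {}
    conn = psycopg2.connect(dsn)
    try:
        cur = conn.cursor()
        for q in queries:
            cur.execute("EXPLAIN (ANALYZE, FORMAT JSON) " + q)
            row = cur.fetchone()[0]
            doc = row if isinstance(row, list) else json.loads(row)
            plan = doc[0]["Plan"]
            ratios[q] = plan["Actual Total Time"] / plan["Total Cost"]
    finally:
        conn.close()
    return ratios

def suspicious(ratios, factor=10.0):
    """Queries whose ratio is more than `factor` away from the median."""
    vals = sorted(ratios.values())
    median = vals[len(vals) // 2]
    return dict((q, r) for q, r in ratios.items()
                if r > median * factor or r < median / factor)

Since cost units are arbitrary, the absolute value of the ratio means
nothing; what matters is how far a given query deviates from everyone
else, which is where the planner's model is most likely wrong.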

-- 
Peter van Hardenberg
San Francisco, California
"Everything was beautiful, and nothing hurt." -- Kurt Vonnegut

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
