Greg Smith <g...@2ndquadrant.com> writes:
> On 11/28/2011 05:51 AM, Robert Haas wrote:
>> Assuming the feature is off by default (and I can't imagine we'd
>> consider anything else), I don't see why that should be cause for
>> concern.  If the instrumentation creates too much system load, then
>> don't use it: simple as that.
> It's not quite that simple though.  Releasing a performance measurement
> feature that itself can perform terribly under undocumented conditions
> has a wider downside than that.

Yeah, that's a good point, and the machines on which this would suck are
exactly the ones where EXPLAIN ANALYZE creates very large overhead.  We
don't seem to see a lot of complaints about that anymore, but we do still
see some ... and yes, it's documented that EXPLAIN ANALYZE can add
significant overhead, but that doesn't stop the questions.

> Instrumentation that can itself become a performance problem is an
> advocacy problem waiting to happen.  As I write this I'm picturing such
> an encounter resulting in an angry blog post, about how this proves
> PostgreSQL isn't usable for serious systems because someone sees massive
> overhead turning this on.

Of course, the rejoinder could be that if you see that, you're not
testing on serious hardware.  But still, I take your point.

> Right now the primary exposure to this class
> of issue is EXPLAIN ANALYZE.  When I was working on my book, I went out
> of my way to find a worst case for that[1],

> [1] (Dell Store 2 schema, query was "SELECT count(*) FROM customers;")

That's pretty meaningless without saying what sort of clock hardware was
on the machine...

			regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
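The worst case described above comes from EXPLAIN ANALYZE reading the clock around every row a plan node emits (the executor's InstrStartNode/InstrStopNode pair), so on a query whose per-row work is nearly free, such as a count(*) over a sequential scan, the clock reads dominate. A minimal sketch of the effect, in Python rather than the executor's C, with the trivial loop standing in for the scan:

```python
import time

N = 200_000  # stand-in for rows scanned

def scan_rows():
    # plain execution: trivial per-row work, no instrumentation
    total = 0
    for _ in range(N):
        total += 1
    return total

def scan_rows_instrumented():
    # same scan "under EXPLAIN ANALYZE": one clock read before and
    # after each row, roughly what InstrStartNode/InstrStopNode do
    total = 0
    elapsed = 0.0
    for _ in range(N):
        t0 = time.perf_counter()
        total += 1
        elapsed += time.perf_counter() - t0
    return total, elapsed

t0 = time.perf_counter(); scan_rows(); plain = time.perf_counter() - t0
t0 = time.perf_counter(); scan_rows_instrumented(); instr = time.perf_counter() - t0
print(f"plain: {plain:.4f}s  instrumented: {instr:.4f}s  "
      f"overhead: {instr / plain:.1f}x")
```

The ratio you see depends entirely on how cheap the clock source is, which is Tom's point about clock hardware: with a fast TSC-backed clock the overhead stays modest, while a slow source (HPET, acpi_pm) can blow it up by an order of magnitude on the same query.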