On 12/4/15 5:14 PM, Peter Geoghegan wrote:
> On Fri, Dec 4, 2015 at 2:44 PM, Jim Nasby <jim.na...@bluetreble.com> wrote:
>> I suspect Cachegrind[1] would answer a lot of these questions (though I've
>> never actually used it). I can't get postgres to run under valgrind on my
>> laptop, but maybe someone that's been successful at valgrind can try
>> cachegrind (it's just another mode of valgrind).
> I've used Cachegrind, and think it's pretty good. You still need a
> test case that exercises what you're interested in, though.
> Distributed costs are really hard to quantify. Sometimes that's
> because they don't exist, and sometimes it's because they can only be
> quantified as part of a value judgement.

If we had a good way to run cachegrind (and maybe if it were run automatically somewhere), then at least we'd know what effect a patch had on things. (For those not familiar: valgrind ships a tool, cg_diff, for diffing two cachegrind runs.) Knowing is half the battle and all that.
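For anyone who wants to try it, here's a rough sketch of the workflow. cg_diff and cg_annotate are valgrind's own tools; the data directory path and the pgbench workload are just placeholders, and I haven't verified this exact invocation against a real build:

```shell
# Run the postmaster directly under cachegrind (not via pg_ctl, which
# daemonizes); --trace-children=yes follows forked backend processes.
# Each process writes a cachegrind.out.<pid> file on exit.
valgrind --tool=cachegrind --trace-children=yes \
    postgres -D /path/to/datadir

# ... drive a workload (e.g. pgbench) from another terminal,
#     then shut the server down cleanly ...

# Diff two runs (say, before and after a patch) and annotate the
# result per function/source line:
cg_diff cachegrind.out.1234 cachegrind.out.5678 > patch.diff
cg_annotate patch.diff
```

The per-line instruction and cache-miss deltas from cg_annotate are exactly the kind of thing you'd want a CI job to archive per commit.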

Another interesting possibility would be a standardized perf test. [1] makes an argument for that.

Maybe a useful way to set stuff like this up would be to create support for running things in Travis CI. Time-based tools would presumably be useless there, but something doing analysis like cachegrind would probably be OK (though I think they cap test runs at an hour or so).

[1] https://news.ycombinator.com/item?id=8426302
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)