Detect missing combined indexes (automatically)
Is there a way to detect missing combined indexes automatically? I am managing a lot of databases, and I think a lot of performance could be gained. But I don't want to do this manually.

My focus is on missing combined indexes, since tools are already available for missing single-column indexes.

Regards,
Thomas Güttler

--
Thomas Guettler http://www.thomas-guettler.de/
I am looking for feedback: https://github.com/guettli/programming-guidelines
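One possible heuristic, sketched below, is to count how often groups of columns are filtered on together in WHERE clauses; combinations that recur frequently are candidates for combined indexes. This assumes you have already extracted, per statement, the table and the set of filtered columns (for example from pg_stat_statements plus a SQL parser); the input format and function name are illustrative, not part of any existing tool.

```python
from collections import Counter
from itertools import combinations

def candidate_combined_indexes(where_columns, min_count=2):
    """Count how often each multi-column combination appears together in
    WHERE clauses.  `where_columns` is a list of (table, set-of-columns)
    pairs.  Returns combinations seen at least `min_count` times,
    most frequent first."""
    counts = Counter()
    for table, cols in where_columns:
        # every subset of two or more columns is a potential combined index
        for r in range(2, len(cols) + 1):
            for combo in combinations(sorted(cols), r):
                counts[(table, combo)] += 1
    return [(key, n) for key, n in counts.most_common() if n >= min_count]

queries = [
    ("orders", frozenset({"customer_id", "status"})),
    ("orders", frozenset({"customer_id", "status", "created_at"})),
    ("orders", frozenset({"customer_id"})),  # single column: ignored here
]
print(candidate_combined_indexes(queries))
# → [(('orders', ('customer_id', 'status')), 2)]
```

A real tool would also have to check pg_indexes to exclude combinations that are already covered, and weight each statement by its call count from pg_stat_statements.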
Automated bottleneck detection
This sounds good. It looks like automated bottleneck detection could be possible with pg_wait_sampling.

Regards,
Thomas

On 25.07.2018 at 12:25, Julien Rouhaud wrote:

On Wed, Jul 25, 2018 at 11:14 AM, Thomas Güttler wrote:

AFAIK powa is based on pg_stat_statements, not on statistical samples. But maybe I am wrong.

Indeed, it is based on pg_stat_statements, but other extensions are supported too. Since pg_stat_statements already provides cumulative counters, there is no need to do sampling. But if you are interested in wait event information, for instance, it supports (in the development version) the pg_wait_sampling extension, which uses sampling to provide efficient and informative data.
Re: Profile what the production server is doing
On 23.07.2018 at 17:16, Flo Rance wrote:

pgobserver might do that as well; it is particularly useful for function performance. https://github.com/zalando/PGObserver

Thank you for pointing me to this. After googling for "PGObserver powa" I found a nice collection of current tools: https://www.quora.com/What-are-the-best-graphical-Monitoring-tools-for-Postgresql

BTW, PGObserver seems a bit dated. There have been only a few updates in recent months. Is there a successor?

Regards,
Thomas
Re: Profile what the production server is doing
On 23.07.2018 at 16:01, Baron Schwartz wrote:

I'm biased, but I think VividCortex (my company's product) is amazing at this.

Looks good, but https://www.vividcortex.com/product/pricing only says "Contact us for pricing options". Why do you hide your prices?

Regards,
Thomas Güttler

On Mon, Jul 23, 2018 at 7:18 AM Thomas Güttler <guettl...@thomas-guettler.de> wrote:

Is there a tool which does this for PostgreSQL? Take a "snapshot" of what the server is doing about 10 times per second. Write this to a file. After N hours you can aggregate the file. What does the server do most of the time? Which tables/indexes get used the most?

Before optimizing a database, I would like to know what is going on in the production system. I know that there are internal tables like pg_stat_statements. But I guess taking a snapshot every N milliseconds will give a better picture of what is going on in real life.

Is there already a tool which works this way? Or is there a better way?

Regards,
Thomas Güttler
Profile what the production server is doing
Is there a tool which does this for PostgreSQL? Take a "snapshot" of what the server is doing about 10 times per second. Write this to a file. After N hours you can aggregate the file. What does the server do most of the time? Which tables/indexes get used the most?

Before optimizing a database, I would like to know what is going on in the production system. I know that there are internal tables like pg_stat_statements. But I guess taking a snapshot every N milliseconds will give a better picture of what is going on in real life.

Is there already a tool which works this way? Or is there a better way?

Regards,
Thomas Güttler
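The sampling idea can be sketched in a few lines: poll pg_stat_activity on a timer, keep the snapshots, and afterwards count which queries were active most often. The sampler below assumes a DB-API connection (e.g. from psycopg2); that dependency, and the function names, are assumptions for illustration, not an existing tool. Only the aggregation is exercised here, on fake snapshots.

```python
import time
from collections import Counter

def aggregate(snapshots):
    """Aggregate sampled (state, query) pairs: the queries that appear in
    the most samples are, statistically, where the server spends its time."""
    return Counter(query for snap in snapshots
                   for (state, query) in snap
                   if state == "active")

def sample(conn, seconds=10, interval=0.1):
    """Poll pg_stat_activity roughly 10x per second for `seconds` seconds.
    `conn` is any DB-API connection, e.g. psycopg2.connect(...)."""
    snapshots = []
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        cur = conn.cursor()
        cur.execute("SELECT state, query FROM pg_stat_activity")
        snapshots.append(cur.fetchall())
        time.sleep(interval)
    return snapshots

# Fake snapshots standing in for two polls of pg_stat_activity:
fake = [[("active", "SELECT 1"), ("idle", "COMMIT")],
        [("active", "SELECT 1")]]
print(aggregate(fake).most_common(1))  # → [('SELECT 1', 2)]
```

In a long-running deployment you would append each snapshot to a file instead of keeping it in memory, and aggregate the file after N hours, as described above.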
Details after Load Peak was: OT: Performance of VM
On 05.02.2018 at 14:26, Andreas Kretschmer wrote:

On 05.02.2018 at 14:14, Thomas Güttler wrote:

What do you suggest to get some reliable figures?

sar is often recommended, see https://blog.2ndquadrant.com/in-the-defense-of-sar/. Can you exclude other reasons like vacuum / vacuum freeze?

In the current case it was a problem in the hypervisor. But I want to be prepared for the next time. The tool sar looks good. This way I can generate a chart where I can see peaks. Nice.

But one thing is still unclear. Imagine I see a peak in the chart. The peak was some hours ago. AFAIK sar only has aggregated numbers, but I need details if I want to answer the question "Why?". The peak is gone, and ps/top/iotop don't help me anymore. Any idea?

Regards,
Thomas Güttler
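One way to keep the details that sar's aggregates lose is to record your own timestamped samples continuously, so that hours later you can look up exactly when the peak happened and correlate it with whatever else you logged at that moment (e.g. a snapshot of `ps` output taken by the same job). The minimal sketch below records load averages to a CSV; the function names and file layout are assumptions for illustration, and a real setup would run `record_load` from cron every minute.

```python
import csv
import os
import tempfile
import time

def record_load(path):
    """Append one timestamped load-average sample to a CSV file.
    Unlike top/ps, the file lets you answer "why was there a peak
    at 03:00?" hours after the fact."""
    load1, load5, load15 = os.getloadavg()
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([int(time.time()), load1, load5, load15])

def peaks(path, threshold):
    """Return the timestamps whose 1-minute load exceeded `threshold`."""
    with open(path) as f:
        return [int(ts) for ts, l1, l5, l15 in csv.reader(f)
                if float(l1) > threshold]

# Demo: two samples into a temporary file.
path = os.path.join(tempfile.mkdtemp(), "load.csv")
record_load(path)
record_load(path)
print(peaks(path, threshold=-1.0))  # every sample exceeds -1.0
```

The same pattern extends to per-process detail: have the cron job also dump the top CPU and I/O consumers next to each sample, so the "Why?" is on disk when you come looking.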
OT: Performance of VM
This is a bit off-topic, since it is not about the performance of PostgreSQL itself. But maybe some of you have the same issue.

We run PostgreSQL in virtual machines provided by our customer. We are not responsible for the hypervisor and have no access to it. The I/O performance of our application was terribly slow yesterday. The users blamed us, but it seems that something was wrong with the hypervisor.

For the next time I would like to have reliable figures to support my guess that the hypervisor (and not our application) is the bottleneck. My vague strategy is to run an I/O performance check every N minutes and record the numbers. Of course I could do some dirty scripting, but I would like to avoid reinventing things. I guess this has already been solved by people with more brains and experience than I have :-)

What do you suggest to get some reliable figures?

Regards,
Thomas Güttler
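Dedicated benchmarking tools such as fio do this properly, but the periodic check itself can be a very small script. The sketch below (function name and parameters are illustrative assumptions) measures sequential write throughput by writing a fixed amount of data to a temporary file with an fsync; logging the result every N minutes gives hard numbers to show the hypervisor team when storage suddenly slows down.

```python
import os
import tempfile
import time

def io_write_throughput(mb=64, block_size=1024 * 1024, directory=None):
    """Return sequential write throughput in MB/s, measured by writing
    `mb` megabytes of random data to a temporary file and fsync()ing so
    the timing includes the actual flush to storage."""
    buf = os.urandom(block_size)
    with tempfile.NamedTemporaryFile(dir=directory) as f:
        start = time.monotonic()
        for _ in range(mb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
        elapsed = time.monotonic() - start
    return mb / elapsed

rate = io_write_throughput(mb=8)
print(f"{rate:.1f} MB/s")
```

Point `directory` at the filesystem your database actually lives on; measuring a different mount measures the wrong disk. Keep `mb` small enough that the check itself does not become the I/O problem.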