Sorry I meant to say, "To track transactions that *have been* left idle but not committed or rolled back you would..." Typo
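
For reference, a query along these lines will list the sessions sitting in that state (a rough sketch; the pg_stat_activity columns used here exist in 9.2 and later):

    -- Confirm activity tracking is enabled (on by default; controlled by
    -- track_activities in postgresql.conf)
    SHOW track_activities;

    -- Sessions currently "idle in transaction", oldest transaction first
    SELECT pid, usename, datname, xact_start, query_start, query
      FROM pg_stat_activity
     WHERE state = 'idle in transaction'
     ORDER BY xact_start;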
*Will J. Dunn*
*willjdunn.com <http://willjdunn.com>*

On Tue, Jun 30, 2015 at 4:33 PM, William Dunn <dunn...@gmail.com> wrote:

> On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel <lukasz.wro...@motorolasolutions.com> wrote:
>
>> Perhaps I'm missing some indexes on the tables (creating them on the columns on which the where clause was used in the long queries seemed to halve their times). Also how can I monitor my transactions and if they are closed properly?
>
> To track transactions that have not been left idle but not committed or rolled back you would:
>
> 1) Set track_activities true in the config (doc: http://www.postgresql.org/docs/current/static/runtime-config-statistics.html#GUC-TRACK-ACTIVITIES)
> 2) Query the pg_stat_activity view for connections where state = 'idle in transaction' (doc: http://www.postgresql.org/docs/current/static/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW)
>
> As you would suspect, transactions that have been left "idle in transaction" prevent vacuum from removing old tuples (because they are still in scope for that transaction).
>
> *Will J. Dunn*
> *willjdunn.com <http://willjdunn.com>*
>
> On Tue, Jun 30, 2015 at 4:27 PM, William Dunn <dunn...@gmail.com> wrote:
>
>> Hello Lukasz,
>>
>> On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel <lukasz.wro...@motorolasolutions.com> wrote:
>>
>>> There doesn't seem to be any issues with disk space, memory or CPU, as neither of those is even 50% used (as per df and top).
>>
>> Are you using the default PostgreSQL configuration settings, or have you custom tuned them? The default settings are targeted for wide compatibility and are not optimized for performance. If PostgreSQL is performing badly and using a small amount of system resources it is likely some tuning is needed. See docs: http://www.postgresql.org/docs/current/static/runtime-config.html
>>
>> On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel <lukasz.wro...@motorolasolutions.com> wrote:
>>
>>> For whatever reason there is also no data in pg_stat* tables.
>>
>> You can also turn on tracking (for statistics views) by enabling statistics collection in the config: http://www.postgresql.org/docs/current/static/runtime-config-statistics.html
>>
>> *Will J. Dunn*
>> *willjdunn.com <http://willjdunn.com>*
>>
>> On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel <lukasz.wro...@motorolasolutions.com> wrote:
>>
>>> Hello.
>>>
>>> I have multiple problems with my database, the biggest of which is how to find out what is actually wrong.
>>>
>>> First of all I have a 9.3 postgres database that is running for about a month. Right now the queries on that database are running very slowly (select with a simple "where" on a non-indexed column on a table with about 5000 records takes 1,5s, a complicated hibernate select with 7 joins on tables of about 5000 records takes about 15s, insert or update on a table with 35000 records takes up to 20 mins).
>>>
>>> The tables and indexes on those tables are bloated to the point where this query: https://wiki.postgresql.org/wiki/Show_database_bloat shows wasted bytes in hundreds of MB.
>>>
>>> For whatever reason there is also no data in pg_stat* tables.
>>>
>>> So due to the long query times, there are multiple errors in my application logs like "No free connection available" or "Could not synchronize database state with session", or "Failed to rollback transaction" and the application fails to start in the required time.
>>>
>>> The only thing that helps fix the situation seems to be vacuum full of the entire database. Regular vacuum doesn't even lower the dead tuples count (which appear by the thousands during application launching). Reindex of all the indexes in the database didn't help as well. All autovacuum parameters are default.
>>>
>>> There doesn't seem to be any issues with disk space, memory or CPU, as neither of those is even 50% used (as per df and top).
>>>
>>> Is there any good tool that will monitor the queries and generate a report with useful information on what might be the problem? I tried pg_badger, but all I got were specific queries and their times, but the long query times are just one of the symptoms of what's wrong with the database, not the cause.
>>>
>>> Perhaps I'm missing some indexes on the tables (creating them on the columns on which the where clause was used in the long queries seemed to halve their times). Also how can I monitor my transactions and if they are closed properly?
>>>
>>> I will be grateful for any help and if you need more details I can provide them if possible.
>>>
>>> Best regards.
>>> Lukasz
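
On the missing pg_stat* data mentioned above: the table-level statistics views are fed by the statistics collector, so it is worth confirming collection has not been disabled (track_counts is the setting that drives the per-table counters, and autovacuum relies on it as well). A quick sketch to check, and to see which tables are accumulating dead tuples once counters are collected:

    -- Both should normally be 'on'; they are set in postgresql.conf
    SHOW track_counts;
    SHOW track_activities;

    -- Tables with the most dead tuples, and when they were last vacuumed
    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
      FROM pg_stat_user_tables
     ORDER BY n_dead_tup DESC
     LIMIT 20;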
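On the tuning suggestion: the settings most commonly raised from their conservative defaults can be inspected like this. Appropriate values depend entirely on the machine, so none are suggested here; in 9.3 they are changed in postgresql.conf followed by a reload (or restart for shared_buffers).

    -- Inspect current memory-related settings
    SHOW shared_buffers;
    SHOW work_mem;
    SHOW maintenance_work_mem;
    SHOW effective_cache_size;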
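And on the indexing and bloat points in the original message, a sketch of the relevant commands (the table, column, and index names below are only placeholders):

    -- Hypothetical example: index a column used in the slow WHERE clauses
    -- (CONCURRENTLY avoids holding a long exclusive lock on a busy table)
    CREATE INDEX CONCURRENTLY my_table_status_idx ON my_table (status);

    -- Once whatever is holding old tuples in scope is resolved, bloat in a
    -- single table can be reclaimed with VACUUM FULL (it locks and rewrites
    -- the table, so schedule it carefully)
    VACUUM FULL VERBOSE my_table;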