Re: [PERFORM] Slow queries on big table
Tyrrill, Ed wrote:
> mdsdb=# \d backup_location
>        Table "public.backup_location"
>   Column   |  Type   | Modifiers
> -----------+---------+-----------
>  record_id | bigint  | not null
>  backup_id | integer | not null
> Indexes:
>     "backup_location_pkey" PRIMARY KEY, btree (record_id, backup_id)
>     "backup_location_rid" btree (record_id)
> Foreign-key constraints:
>     "backup_location_bfk" FOREIGN KEY (backup_id) REFERENCES backups(backup_id) ON DELETE CASCADE

[snip]

> mdsdb=# explain analyze select record_id from backup_location where backup_id = 1070;
>                                  QUERY PLAN
> -----------------------------------------------------------------------------
>  Index Scan using backup_location_pkey on backup_location
>    (cost=0.00..1475268.53 rows=412394 width=8)
>    (actual time=3318.057..1196723.915 rows=2752 loops=1)
>    Index Cond: (backup_id = 1070)
>  Total runtime: 1196725.617 ms
> (3 rows)

The "backup_location_rid" index on your table is not necessary. Since record_id is the leading column of the primary key index on (record_id, backup_id), Postgres can use that index even when a query is constrained only by record_id. See http://www.postgresql.org/docs/8.2/interactive/indexes-multicolumn.html for details.

The explain plan shows that your query filters on backup_id but uses the primary key index on (record_id, backup_id). Based on the table definition, you do not have any good index for filtering on backup_id. The plan also seems way off: without a good index on backup_id, I would expect a sequential scan to be used. Did you disable sequential scans before running this query? Have you altered any other configuration or planner parameters?

Since "backup_location_rid" is redundant, I would recommend dropping that index and creating a new one on just backup_id. This should be a net wash on space, and the new index should allow a straight index scan for the query you presented. Don't forget to analyze after changing the indexes.

Hope this helps.
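For illustration, the index change suggested above would look something like the following (the new index name backup_location_bid is just an example, not from the original schema):

```sql
-- Drop the redundant single-column index; record_id is already the
-- leading column of the primary key index, which covers these lookups.
DROP INDEX backup_location_rid;

-- Create an index that supports filtering on backup_id.
CREATE INDEX backup_location_bid ON backup_location (backup_id);

-- Refresh planner statistics after changing the indexes.
ANALYZE backup_location;
```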
Andrew

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not match
Re: [PERFORM] Performance problems with large telemetric datasets on 7.4.2
Sven Clement wrote:
> Table: "public.tmdata"
> ...
>  id | integer | default -2147483684::bigint
> ...
> Table: "public.tmdataintervalsec"
> ...
>  id | integer | default -2147483684::bigint

Not that this directly addresses the performance issues you described (I have already seen two recommendations that you upgrade), but with the table definitions you posted, one of the first things I noticed was that the default value for an integer column is a bigint value. Some quick 32-bit math shows that the smallest legal 32-bit integer is -2147483648, not -2147483684 (notice that the last two digits are transposed).

I checked your previous post and saw that you are currently running PG 7.4.2/7.4.7 (the subject says 7.4.2, but you indicate 7.4.7 in the body of your original post). A quick check on my 8.1.9 box, using the same bigint default value for an integer column, produced an "integer out of range" error when I attempted to use the default value.

I don't know the exact workings of your system, but you'll need to watch out for any cases where the default value for the id columns was used. If that default value was used (and was allowed by your PG version), you will probably have values in the id column that are not what you'd expect. I don't know exactly how a bigint would be coerced into an integer, but it would probably truncate in some form, which would give you positive values in the id column where you expected the smallest 32-bit integer value (i.e. -2147483648).

I don't know if this was ever actually an issue (if you never rely on the default value for the id column -- maybe 7.4.7 would generate the same error if you did), but if it was, you need to look at a couple of things before upgrading (whether to a more recent 7.4.X or 8.2.4):

1. If you do rely on the default clause for the id column, you may encounter "integer out of range" errors with your existing codebase.
2. You may have values in the id column that are supposed to represent the smallest 32-bit integer but are in fact positive integers.

You will probably want to investigate these potential issues and perform any necessary schema changes and data cleanup before attempting an upgrade. Again, I'm not sure whether this was ever an issue or whether it has any effect on your database; I don't have any PG machines running anything prior to 8.1.X, so I can't really test it. I just saw the bigint value as a default for an integer column and it caught my eye.

Hope this helps you avoid some problems when upgrading.

Andrew

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at
       http://www.postgresql.org/about/donate
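As a footnote to the truncation concern discussed above, here is a small sketch of what the mistyped default would become if the bigint were stored via simple two's-complement truncation to 32 bits. This is an assumption about the coercion behavior (the original poster's PG 7.4 may have behaved differently), but it shows why a large positive id value would be the telltale sign:

```python
def truncate_to_int32(value: int) -> int:
    """Simulate two's-complement truncation of an integer to 32 bits."""
    truncated = value & 0xFFFFFFFF   # keep only the low 32 bits
    if truncated >= 2**31:           # reinterpret the result as signed
        truncated -= 2**32
    return truncated

# The mistyped default from the posted table definitions becomes a
# large *positive* integer under 32-bit truncation:
print(truncate_to_int32(-2147483684))   # 2147483612

# The intended smallest 32-bit integer survives truncation unchanged:
print(truncate_to_int32(-2147483648))   # -2147483648
```

So rows that silently used the bad default would carry an id near the top of the positive integer range rather than the sentinel value the schema intended, which is what to scan for before upgrading.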