On Aug 29, 2006, at 7:52 AM, Willo van der Merwe wrote:
> Hi,

Hi,

What about doing a little bit of normalization? With 700k rows you could probably gain some improvements by:

* normalizing the type and user columns to integer keys (dropping the 8 byte overhead for storing the field lengths)
* maybe changing the type column to a smallint if there is just a small range of possible values (emulating an enum type in other databases), rather than joining to another table
* maybe moving message out of this table into a separate table that is joined only when you need the column's content (if the majority of the rows are big and not null, but not big enough to be TOASTed, so that only a small number of rows fit onto an 8k page)

A rough sketch of these changes is below. Doing these things would fit more rows onto each page, making the scan less intensive by not causing the drive to seek as much.

Of course, all of these suggestions depend on your workload.

Cheers,

--
Rusty Conover
InfoGears Inc.
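A minimal sketch of the schema change being suggested, assuming a simple log-style table; all table and column names here (log_main, log_type, log_user, log_message) are illustrative and do not come from the original thread:

    -- Hypothetical original layout (assumed, not from the thread):
    --   CREATE TABLE log (id serial PRIMARY KEY, type text, "user" text, message text);

    -- Lookup table for the type column; a smallint key emulates an enum.
    CREATE TABLE log_type (
        type_id  smallint PRIMARY KEY,
        name     text NOT NULL UNIQUE
    );

    -- Lookup table for the user column, keyed by an integer.
    CREATE TABLE log_user (
        user_id  integer PRIMARY KEY,
        name     text NOT NULL UNIQUE
    );

    -- Narrow main table: small fixed-width keys instead of text columns,
    -- so many more rows fit on each 8k page.
    CREATE TABLE log_main (
        id       serial PRIMARY KEY,
        type_id  smallint NOT NULL REFERENCES log_type (type_id),
        user_id  integer  NOT NULL REFERENCES log_user (user_id)
    );

    -- The wide message column lives in its own table and is only
    -- joined in when its content is actually needed.
    CREATE TABLE log_message (
        log_id   integer PRIMARY KEY REFERENCES log_main (id),
        message  text
    );

    -- Example query that pulls the message text back in on demand:
    SELECT m.id, t.name AS type, u.name AS "user", msg.message
    FROM log_main m
    JOIN log_type t ON t.type_id = m.type_id
    JOIN log_user u ON u.user_id = m.user_id
    LEFT JOIN log_message msg ON msg.log_id = m.id;

Scans that only touch log_main now read far fewer pages, at the cost of extra joins when the text columns are needed.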