Steinar H. Gunderson wrote:

On Tue, Aug 31, 2004 at 11:11:02AM -0700, Ron St-Pierre wrote:

We have a web based application with data that is updated daily. The biggest bottleneck occurs when we try to update
one of the tables. This table contains 58,000 rows and 62 columns, and EVERY column is indexed.

That is usually a very bad idea; for every update, PostgreSQL has to update
62 indexes. Do you really do queries on all those 62 columns?

Yes, I know it's not a very good idea; however, queries are allowed against all of those columns. One option is to disable some or all of the indexes before the update, run the update, and then recreate the indexes, but that may slow down user queries. Because there are so many indexes, recreating them after the update is time-consuming.
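For reference, the drop-and-rebuild approach would look roughly like this (the index names here are made up for illustration; on 7.4 there is no DROP INDEX IF EXISTS, so each name must match exactly):

BEGIN;
DROP INDEX datatable_field01_idx;
DROP INDEX datatable_field02_idx;
-- ... one DROP per indexed column ...

-- run the bulk update with no index-maintenance overhead
UPDATE dataTable SET field01 = 44.5, field02 = 44.5 WHERE id = 1234;
-- ... remaining updates ...

CREATE INDEX datatable_field01_idx ON dataTable (field01);
CREATE INDEX datatable_field02_idx ON dataTable (field02);
-- ... one CREATE per indexed column ...
COMMIT;

Note that DROP INDEX takes an exclusive lock on the table, so concurrent user queries will block until the transaction commits -- which is exactly the slowdown you're worried about.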

A typical update is:
UPDATE dataTable SET field01=44.5, field02=44.5, field03='Bob',
field04='foo', ... , field60='2004-08-30', field61='2004-08-29'
WHERE id = 1234;

That looks like poor database normalization, really. Are you sure you don't want to split this into multiple tables instead of having 62 columns?

No, it is properly normalized. The data in this table is stock fundamentals, stuff like 52-week high, ex-dividend date, etc.

Other notables:
The UPDATE is run from within a function: FOR rec IN SELECT ... LOOP RETURN NEXT rec; UPDATE dataTable ...
Postgres 7.4.3
debian stable
80 GB IDE drive (we can't change it)

Are you doing all this in multiple transactions, or in a single one? Wrapping the FOR loop in a transaction might help speed.
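Wrapping the whole run in one explicit transaction is just (update_data() being a placeholder name for whatever function drives the loop):

BEGIN;
SELECT * FROM update_data();  -- hypothetical function running the FOR ... LOOP
COMMIT;

Inside one transaction, the per-statement commit overhead (fsync on every UPDATE) is paid once at the end instead of per row.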

We're doing it in multiple transactions within the function. Could we do something like this?:

FOR rec IN SELECT id, field01, field02, ... FROM otherTable LOOP
    RETURN NEXT rec;
    UPDATE dataTable SET field01 = rec.field01, field02 = rec.field02, field03 = rec.field03, ...
    WHERE id = rec.id;
END LOOP;

If we can do it this way, are there any other gotcha's we should be aware of?
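One gotcha worth noting: a row-at-a-time loop pays per-row executor overhead. If otherTable carries the matching id, the whole loop can often be replaced by a single joined UPDATE (a sketch, reusing the table and column names assumed above):

UPDATE dataTable
   SET field01 = o.field01,
       field02 = o.field02
       -- ... remaining fields ...
  FROM otherTable o
 WHERE dataTable.id = o.id;

A single statement like this touches each target row once and lets the planner pick a join strategy, rather than doing 58,000 separate index lookups. Also remember that every updated row leaves a dead tuple behind, so a VACUUM ANALYZE on dataTable after the daily run keeps the table and its 62 indexes from bloating.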

