We have a web-based application with data that is updated daily. The biggest bottleneck occurs when we try to update
one of the tables. This table contains 58,000 rows and 62 columns, and EVERY column is indexed. Every column can be
queried by users through the web interface, so we are reluctant to drop the indexes (recreating them afterwards would
be time-consuming as well). The primary key is an INT and the rest of the columns are a mix of NUMERIC, TEXT, and DATE.
A typical update is:
UPDATE dataTable SET field01=44.5, field02=44.5, field03='Bob',
field04='foo', ... , field60='2004-08-30', field61='2004-08-29'
WHERE id = 1234;
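
For context, the table is shaped roughly like this (a simplified sketch; only a handful of the 62 columns are shown
and the index names are illustrative):

CREATE TABLE dataTable (
    id      INT PRIMARY KEY,
    field01 NUMERIC,
    field03 TEXT,
    field60 DATE,
    field61 DATE
    -- ... 62 columns in total: a mix of NUMERIC, TEXT, and DATE
);
-- one single-column index on every non-PK column, 61 in all:
CREATE INDEX datatable_field01_idx ON dataTable (field01);
CREATE INDEX datatable_field03_idx ON dataTable (field03);
-- ... and so on for each remaining column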


Also of note: the update runs about 10 times per day. We receive blocks of data from 10 different sources, pre-process
each block, and then update the table. We also run VACUUM FULL ANALYZE nightly.
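
In practice each load boils down to a batch of single-row updates like the one above, plus the nightly maintenance
(a simplified sketch, trimmed to a few columns with made-up values):

UPDATE dataTable SET field01 = 44.5, field03 = 'Bob', field60 = '2004-08-30' WHERE id = 1234;
UPDATE dataTable SET field01 = 12.1, field03 = 'Sue', field60 = '2004-08-30' WHERE id = 1235;
-- ... one such UPDATE per row in the incoming block

-- once a night:
VACUUM FULL ANALYZE dataTable;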


Does anyone have ideas on how we can speed this up, whether by changing the updates, designing the database
differently, or something else? This is currently a big problem for us.


Other notables:
The UPDATE is run from within a PL/pgSQL function: FOR rec IN SELECT ... LOOP RETURN NEXT rec; UPDATE dataTable ... (a rough sketch of the function follows below)
Postgres 7.4.3
debian stable
2 GB RAM
80 GB IDE drive (we can't change it)
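
For reference, the function has roughly this shape (a sketch only: the function and staging table names are made up,
the staging rows are assumed to match dataTable's row type, and only two of the 61 SET clauses are shown):

CREATE OR REPLACE FUNCTION refresh_datatable() RETURNS SETOF dataTable AS '
DECLARE
    rec RECORD;
BEGIN
    -- loop over the pre-processed incoming rows
    FOR rec IN SELECT * FROM incoming_data LOOP
        RETURN NEXT rec;
        -- one wide single-row UPDATE per incoming row, as shown earlier
        UPDATE dataTable
           SET field01 = rec.field01,
               field03 = rec.field03
         WHERE id = rec.id;
    END LOOP;
    RETURN;
END;
' LANGUAGE plpgsql;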


shared_buffers = 2048
sort_mem = 1024
max_fsm_pages = 40000
checkpoint_segments = 5
random_page_cost = 3



Thanks

Ron


