Ron,

Yeah, I agree, but I'm not allowed to remove those indexes.
The emp table has 60 columns, all indexed; about two-thirds are numeric, but those are not affected by this update. The other 50+ columns are updated in the middle of the night, and the amount of time that update takes isn't a concern.
Well, I'd say that you have an application design problem, but that's not what you asked for help with ;-)
On my dev server I increased max_fsm_pages from the default of 20000 to 40000 and checkpoint_segments from 3 to 5, and the function ran in about 6-7 minutes, which is a nice improvement. According to the "Annotated postgresql.conf and Global User Configuration (GUC) Guide" on varlena, I'll have to restart postgres for the changes to take effect (correct?). Also, the docs on Varlena show the max_fsm_pages default as 10,000, but my default was 20,000; looks like that needs updating.
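For reference, the two changes amount to this postgresql.conf fragment (values from my dev server as described above; as I understand the docs, max_fsm_pages allocates shared memory at startup and so needs a full restart, while checkpoint_segments should be picked up on a reload, but please correct me if that's wrong):

```
# postgresql.conf -- free space map and WAL settings (dev-server values)
max_fsm_pages = 40000        # was 20000; max heap pages tracked in the free space map
checkpoint_segments = 5      # was 3; WAL segments between automatic checkpoints
```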
Late last night I dumped the table, dropped it, and re-created it from the dump (on the production server, when no one was looking). When I re-ran the function it took almost 11 minutes, which was pretty much in line with my results from the dev server.
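In case it's useful to anyone else, the dump-and-recreate cycle was roughly the following (table and database names here are just placeholders for mine; run it in a maintenance window, since the table is unavailable between the DROP and the restore):

```
# Dump only the emp table, then drop and re-create it from the dump
pg_dump -t emp mydb > emp.sql
psql -c "DROP TABLE emp;" mydb
psql mydb < emp.sql
```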
Sounds like you need to run a REINDEX on the table -- and after that, dramatically increase your max_fsm_pages, and run lazy VACUUM immediately after the batch update to clean up.
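Sketched as SQL, using the emp table from above (REINDEX takes a lock on the table, so schedule it off-hours; the VACUUM should go right after the nightly batch update):

```
-- Rebuild all indexes on emp to shed accumulated index bloat
REINDEX TABLE emp;

-- After the batch update: reclaim dead tuples and refresh planner stats
-- (lazy VACUUM, so it does not take an exclusive lock)
VACUUM ANALYZE emp;
```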
Thanks for your help, Josh. I'll see after the weekend what impact the changes have on the production server.
---------------------------(end of broadcast)--------------------------- TIP 7: don't forget to increase your free space map settings