I've got a table with ~121 million records in it. A select count on it currently takes ~45 minutes, and an update to the table to set a value on one of the columns I finally killed after it had run 17 hours and still not completed. Queries into the table are painfully slow.

The update query that started all this is the one I had to kill after 17 hours; it should have updated all 121+ million records. After that, my select count came down to 19 minutes, which is still a far cry from acceptable.

If you have a column that needs to be updated often for all rows, separate it into a different table, and create a view that joins it back to the main table so that your application still sees the old schema.
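A rough sketch of what that could look like, assuming the big table has a bigint primary key named id and the hot column is an integer called status (all names here are made up for illustration):

    -- move the frequently-updated column into its own narrow table
    CREATE TABLE big_table_status (
        id     bigint PRIMARY KEY REFERENCES big_table (id),
        status integer
    );

    INSERT INTO big_table_status (id, status)
        SELECT id, status FROM big_table;

    -- drop the column from the wide table and hide the split behind a view
    ALTER TABLE big_table DROP COLUMN status;
    ALTER TABLE big_table RENAME TO big_table_base;

    CREATE VIEW big_table AS
        SELECT b.*, s.status
        FROM big_table_base b
        JOIN big_table_status s ON s.id = b.id;

Note that a plain view like this is read-only; if the application also writes through the old table name, you'd need rules or triggers on the view, or point the writes at the two underlying tables.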

This will greatly speed up your update, since in Postgres an update is effectively a delete plus an insert. By updating that one column, you're rewriting all 121 million full rows. If you split it out, you're only rewriting the narrow table that holds that column. Don't forget to vacuum/analyze and reindex when you're done.
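The cleanup step is just the usual maintenance commands on the narrow table (again using the made-up name from above):

    VACUUM ANALYZE big_table_status;
    REINDEX TABLE big_table_status;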

Better yet, if you can stand a short downtime, you can drop the indexes on that column, truncate, then do the 121 million inserts, and finally reindex. That will be MUCH faster.
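A sketch of that approach, assuming the column has already been split out into big_table_status as above and the new values can be produced by a single SELECT (the index name and the constant 0 are placeholders):

    BEGIN;
    DROP INDEX big_table_status_status_idx;  -- any index on the frequently-updated column
    TRUNCATE big_table_status;
    INSERT INTO big_table_status (id, status)
        SELECT id, 0 FROM big_table_base;    -- 0 stands in for whatever the new value is
    CREATE INDEX big_table_status_status_idx ON big_table_status (status);
    COMMIT;
    VACUUM ANALYZE big_table_status;         -- VACUUM has to run outside the transaction

Since TRUNCATE and the bulk insert happen inside one transaction, readers of the view either see the old values or the new ones, never an empty table.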

Craig


