On Wed, 25 Jun 2008, jay wrote:

Why is PostgreSQL so slow? Is it the PG MVCC problem?

Updates are extremely intensive not just because of MVCC, but because a new version of every updated row has to be written out. That means lots of database commits and lots of disk I/O to get them all onto disk.

Couple of suggestions:
-Increase checkpoint_segments a lot; start with a 10X increase to 30.
-If you can afford some potential for data loss in case of a crash, consider using async commit (example settings below):
http://www.postgresql.org/docs/8.3/static/wal-async-commit.html
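
As a sketch of what those two changes might look like in postgresql.conf (assuming the 8.3 default of checkpoint_segments = 3, and only using synchronous_commit = off if losing the last few transactions in a crash is acceptable):

checkpoint_segments = 30                # up from the default of 3
synchronous_commit = off                # commits return before the WAL flush hits disk

checkpoint_segments takes effect on a config reload; synchronous_commit can even be changed per session.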

        Memory: 8GB, 8 x 15K RPM disks, 2 CPUs (Quad-Core AMD)

Is there any sort of write cache on the controller driving those disks? If not, or if you've turned it off, that would explain your problem right there, because you'd be limited by how fast you can sync to disk after each update. Async commit is the only good way around that. If you have a good write cache, that feature won't buy you as much improvement.
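
If the heavy updates come from a single batch job, you don't have to turn async commit on for the whole server; synchronous_commit can be set per session. A sketch, with a hypothetical table name:

SET synchronous_commit TO off;  -- this session only; commits stop waiting for the WAL flush
BEGIN;
UPDATE accounts SET balance = balance + 1 WHERE id < 100000;
COMMIT;                         -- returns immediately; the WAL is flushed shortly afterward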

bgwriter_delay = 20ms                   # 10-10000ms between rounds
bgwriter_lru_maxpages = 500             # 0-1000 max buffers written/round
bgwriter_lru_multiplier = 2.0           # 0-10.0 multiplier on buffers

This is a bit much, and the background writer can get in the way in this situation. You might turn it off (bgwriter_lru_maxpages = 0) until you've sorted through everything else, then increase that parameter again. The combination of 20ms and 500 pages is far faster than your disk system can possibly handle anyway; 100ms/500 or 20ms/100 (those two work out to about the same write rate) would be as aggressive as I'd even consider with an 8-disk array, and something lower is probably more appropriate for you; see the sketch below.
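
In postgresql.conf terms, that progression looks something like this (a sketch, not tuned numbers for your box):

bgwriter_lru_maxpages = 0               # disable the LRU writer while sorting out checkpoints/commits

and later, nothing more aggressive than:

bgwriter_delay = 100ms                  # or keep 20ms and drop maxpages to ~100
bgwriter_lru_maxpages = 500
bgwriter_lru_multiplier = 2.0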

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

