Re: [PERFORM] PG optimization question

2010-01-09 Thread Ludwik Dylag
2010/1/9 Nickolay ni...@zhukcity.ru Okay, I see your point with the staging table. That's a good idea! The only problem I see here is the transfer-to-archive-table process. As you've correctly noticed, the system is kind of real-time and there can be dozens of processes writing to the staging
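A minimal sketch of one way such a transfer could look, assuming hypothetical staging_events and archive_events tables with an immutable recorded_at timestamp column (all names are illustrative, not from the thread). It relies on writers always inserting rows with the current timestamp, so the fixed cutoff keeps the INSERT and the DELETE working on the same set of rows while newer data keeps arriving:

    BEGIN;
    -- archive everything older than the cutoff; now() stays fixed for the whole transaction
    INSERT INTO archive_events
        SELECT * FROM staging_events
        WHERE recorded_at < now() - interval '5 minutes';
    -- remove exactly the rows that were just copied
    DELETE FROM staging_events
        WHERE recorded_at < now() - interval '5 minutes';
    COMMIT;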

Re: [PERFORM] Massive table (500M rows) update nightmare

2010-01-07 Thread Ludwik Dylag
I would suggest: 1. turn off autovacuum 1a. optionally tune the db for better performance for this kind of operation (can't help with that here) 2. restart database 3. drop all indexes 4. update 5. vacuum full table 6. create indexes 7. turn on autovacuum Ludwik 2010/1/7 Leo Mannhart leo.mannh...@beecom.ch
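As a rough illustration of steps 3 through 6 on a hypothetical table big_table with a hypothetical status column (none of these names come from the thread); steps 1, 2 and 7 are postgresql.conf changes and a server restart rather than SQL:

    -- 3. drop the indexes so the update does not have to maintain them row by row
    DROP INDEX big_table_status_idx;
    -- 4. the bulk update itself (hypothetical predicate and value)
    UPDATE big_table SET status = 'archived' WHERE status = 'stale';
    -- 5. reclaim the dead row versions left behind by the update
    VACUUM FULL big_table;
    -- 6. rebuild the index from scratch, usually cheaper than maintaining it during the update
    CREATE INDEX big_table_status_idx ON big_table (status);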

Re: [PERFORM] database size growing continously

2009-10-29 Thread Ludwik Dylag
2009/10/29 Peter Meszaros p...@prolan.hu Hi All, I use postgresql 8.3.7 as a huge queue. There is a very simple table with six columns and two indices, and about 6 million records are written into it every day, continuously, committed every 10 seconds from 8 clients. The table stores
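For illustration only, a queue table of roughly that shape (six columns, two indexes) might look like the following; the thread does not show the real schema, so every name here is hypothetical:

    CREATE TABLE message_queue (
        id          bigserial PRIMARY KEY,              -- first index
        client_id   integer     NOT NULL,
        created_at  timestamptz NOT NULL DEFAULT now(),
        status      smallint    NOT NULL DEFAULT 0,
        priority    smallint    NOT NULL DEFAULT 0,
        payload     text        NOT NULL
    );
    CREATE INDEX message_queue_created_at_idx
        ON message_queue (created_at);                  -- second index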

[PERFORM] Query logging time, not values

2009-10-08 Thread Ludwik Dylag
Hi, I have a database and ~150 clients writing quite big pieces of text to it non-stop. I have a performance problem, so I tried to increase the log level so I could see which queries take the most time. My postgresql.conf (Log section) is: log_destination = 'stderr' logging_collector = on
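One way to capture query timings without raising the overall log level is log_min_duration_statement, which logs only statements that exceed a duration threshold, together with how long they took. A sketch of the relevant postgresql.conf lines (the 250 ms threshold is an arbitrary example, not from the thread):

    # postgresql.conf - logging section (sketch)
    log_destination = 'stderr'
    logging_collector = on
    log_min_duration_statement = 250   # log any statement running longer than 250 ms, with its duration
    log_statement = 'none'             # do not separately log every statement text
    log_line_prefix = '%t [%p] '       # timestamp and backend PID on each log line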

[PERFORM] disable heavily updated (but small) table auto-vacuuming

2009-09-15 Thread Ludwik Dylag
Hello, I have a database where I create a table daily. Every day ~3 million rows are inserted into it and each of them is updated two times. The process lasts ~24 hours, so the db load is the same all the time. Total size of the table is ~3GB. My current vacuum settings are: autovacuum = on
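A sketch of switching autovacuum off for just that one table, assuming a hypothetical daily table named events_daily; on PostgreSQL 8.4 and later this is a per-table storage parameter, while 8.3 used the pg_autovacuum system catalog instead:

    -- per-table switch (8.4+ syntax); the table name is illustrative
    ALTER TABLE events_daily SET (autovacuum_enabled = false);

    -- with autovacuum off, the dead rows from the two updates per row
    -- still have to be reclaimed manually, e.g. once the daily load finishes:
    VACUUM ANALYZE events_daily;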