On Tuesday 09 September 2003 13:40, Mindaugas Riauba wrote:
>   Hello,
>
>   I have a small table (up to 10000 rows) and every row will be updated
> once per minute. The table also has a "before update on each row" trigger
> written in plpgsql. But 99.99% of the time the trigger will do nothing
> to the database. It will just compare the old and new values in the row,
> and those values will almost always be identical.
>
>   Now I tried a simple test and was able to do 10000 updates on a
> 1000-row table in ~30s. That's practically enough, but I'd like to have
> more headroom.
>   Also, the best result I achieved was by doing commit+vacuum every ~500
> updates.
>
>   How can I improve performance, and will version 7.4 bring anything
> valuable for my task? Rewriting in some other scripting language is not
> a problem. The trigger is simple enough.

Well, try it without the trigger. If performance improves markedly, it might 
be worth rewriting the trigger in C.
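Before that, though: if the trigger really is a no-op almost every time, you can 
have it suppress the UPDATE entirely by returning NULL from the BEFORE trigger 
when nothing has changed. That avoids writing a new row version (and the vacuum 
work that goes with it). A sketch only - "readings" and its columns are made-up 
names, and note the comparison needs care if the columns can be NULL:

```sql
-- Sketch: skip the write when old and new values match.
-- Returning NULL from a BEFORE ROW trigger cancels the operation
-- for that row, so no dead tuple is created.
CREATE FUNCTION skip_noop_update() RETURNS TRIGGER AS '
BEGIN
    IF NEW.value = OLD.value AND NEW.status = OLD.status THEN
        RETURN NULL;    -- nothing changed: suppress the UPDATE
    END IF;
    RETURN NEW;         -- something changed: let the UPDATE proceed
END;
' LANGUAGE plpgsql;

CREATE TRIGGER readings_skip_noop BEFORE UPDATE ON readings
    FOR EACH ROW EXECUTE PROCEDURE skip_noop_update();
```

With 99.99% of updates being no-ops, that could cut your write traffic (and the 
need to vacuum so often) dramatically.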

If not, you're probably saturating the disk I/O - iostat/vmstat will let 
you see what's happening. If it is your disks, you might see whether moving the 
WAL onto a separate drive helps, or check the list archives for plenty of 
discussion about RAID setups.
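On your commit+vacuum batching: since only the one table is churning, vacuum 
just that table rather than the whole database - it is much cheaper. Again 
"readings" is a made-up name:

```sql
-- Reclaim dead row versions in the hot table only;
-- a bare VACUUM would scan every relation in the database.
VACUUM readings;
```

With 7.3 you need to vacuum often enough that the dead tuples from each batch 
of updates don't pile up, which matches what you found at ~500 updates.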

>   Postgres v7.3.4, shared_buffers=4096 max_fsm settings also bumped up
> 10 times.

Well, effective_cache_size is useful for reads, but won't help with writes. 
You might want to look at wal_buffers and see if increasing it helps, but I 
couldn't say for sure.
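For what it's worth, the write-side knobs in postgresql.conf to experiment with 
look something like this - the values are only starting points to test against 
your workload, not recommendations:

```
# postgresql.conf - illustrative values only
shared_buffers = 4096         # you have this already
wal_buffers = 16              # 7.3 default is 8; cheap to raise and measure
checkpoint_segments = 8       # 7.3 default is 3; spreads out checkpoint I/O
```

Change one at a time and re-run your update test, otherwise you won't know 
which setting made the difference.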
-- 
  Richard Huxton
  Archonet Ltd
