Hello

Did you vacuum?

It's good practice to vacuum the table after removing a large number of rows.
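
After a mass DELETE, the old row versions stay in the table until VACUUM
reclaims them, so a sequential scan such as count(*) still has to read
through all the dead rows. A minimal sketch, assuming your table is named
"mytable" (the name is just a placeholder):

VACUUM ANALYZE mytable;  -- "mytable" is a placeholder; reclaims dead rows and updates planner stats
-- or, to also compact the table on disk (note: takes an exclusive lock):
VACUUM FULL mytable;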

Regards
Pavel Stehule

2007/7/22, Jozsef Szalay <[EMAIL PROTECTED]>:

I'm having this very disturbing problem. I have a table with about
100,000 rows in it. Our software deletes the majority of these rows and
then bulk-loads another 100,000 rows into the same table. All of this
happens within a single transaction. I then run a simple
"select count(*) from …" statement that never returns. In the meantime,
the backend Postgres process is taking close to 100% of the CPU. The
hang-up does not always happen on the same statement, but eventually it
happens 2 out of 3 times. If I dump and then restore the schema where
this table resides, the problem is gone until the next time we run
through the whole process of deleting, loading, and querying the table.
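
Roughly, the sequence is the following (the table and file names below
are placeholders, with COPY standing in for our bulk loader):

-- placeholder names throughout; COPY shown for illustration only
BEGIN;
DELETE FROM big_table WHERE ...;          -- removes most of the ~100,000 rows
COPY big_table FROM '/path/to/new_data';  -- bulk-loads ~100,000 new rows
COMMIT;

SELECT count(*) FROM big_table;           -- this is the statement that never returns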

There is no other activity in the database. All requested locks are granted.

Has anyone seen similar behavior?

Some details:

Postgres v 8.1.2
Linux Fedora 3

shared_buffers = 65536
temp_buffers = 32768
work_mem = 131072
maintenance_work_mem = 131072
max_stack_depth = 8192
max_fsm_pages = 40000
wal_buffers = 16
checkpoint_segments = 16

top reports

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
19478 postgres  25   0  740m 721m 536m R 99.7  4.4 609:41.16 postmaster

ps -ef | grep postgres reports

postgres 19478  8061 99 00:11 ?        10:13:03 postgres: user dbase [local] SELECT

strace -p 19478

no system calls reported

Thanks for the help!

Jozsef
