Thanks so much for your help.

I can select the 80K rows out of the 29K rows very fast, but when I delete
them, it always just hangs there (> 4 hours without finishing), not deleting
anything at all. Finally, I selected pk_col from test where cola = 'abc',
redirected the output to a file with the list of pk_col values, and turned
that into a script of 80K individual delete statements. That script ran fine;
it was slow, but it actually did the delete work:

delete from test where pk_col = n1;
delete from test where pk_col = n2;
...
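
For reference, I built the script from within psql, roughly like this (a
sketch from memory, not the exact commands I ran; del.sql is just the name I
gave the generated file):

-- suppress headers/footers and send query output to a file
\t
\o del.sql
select 'delete from test where pk_col = ' || pk_col || ';'
  from test where cola = 'abc';
\o
\t
-- then run the generated script
\i del.sql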

My next question is: what is the difference between "select" and "delete"?
There is another table (330K rows) with a foreign key referencing the test
(parent) table that I am deleting from, and that foreign key column does not
have an index on it.

Deleting one row at a time is fine: delete from test where pk_col = n1;

but deleting the big chunk all at once (80K rows) always hangs: delete from
test where cola = 'abc';

I am wondering if I don't have enough memory to hold and carry out the
80K-row delete... but then how come I can select those 80K rows very fast?
What is the difference between select and delete?

Maybe the foreign key without an index does play a big role here: with a
330K-row table referencing the 29K-row table, every row deleted from the
parent triggers a scan of the child table to check that the row is not still
referenced. Maybe a select from the parent table does not have to check the
child table at all?
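
If that is the case, I guess an index on the foreign key column in the child
table would help those per-row checks; something like this, where child_table
and parent_id are placeholders for the real names:

create index child_table_parent_id_idx on child_table (parent_id);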

Thank you for pointing out that I can drop the constraint first; I can
imagine that it will be a lot faster.

But what if it is a memory issue that prevents me from deleting the 80K rows
all at once? Where do I check for a memory problem (buffer pool), and how do
I tune it on the memory side?
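
The only memory-related knobs I know how to look at are the server settings,
e.g. from psql:

show shared_buffers;
show work_mem;

but I am not sure whether those are what matters for a big delete.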

Thanks a lot,
Jessica




----- Original Message ----
From: Craig Ringer <[EMAIL PROTECTED]>
To: Jessica Richard <[EMAIL PROTECTED]>
Cc: pgsql-performance@postgresql.org
Sent: Friday, July 4, 2008 1:16:31 AM
Subject: Re: [PERFORM] slow delete

Jessica Richard wrote:
> I have a table with 29K rows total and I need to delete about 80K out of it.

I assume you meant 290K or something.

> I have a b-tree index on column cola (varchar(255)) for my where clause
> to use.
> 
> my "select count(*) from test where cola = 'abc'" runs very fast,
> 
> but my actual "delete from test where cola = 'abc';" takes forever; it
> never finishes, and I haven't figured out why....

When you delete, the database server must:

- Check all foreign keys referencing the data being deleted
- Update all indexes on the data being deleted
- and actually flag the tuples as deleted by your transaction

All of which takes time. It's a much slower operation than a query like your
SELECT, which only has to find out how many tuples match the search criteria.
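
If you want to see where the time actually goes, you can run the delete
under EXPLAIN ANALYZE inside a transaction and roll it back (on a smaller
batch if the full delete takes too long). The trigger statistics at the end
of the output should show the time spent on foreign key checks. A sketch:

begin;
explain analyze delete from test where cola = 'abc';
-- output ends with per-trigger times, including FK constraint triggers
rollback;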

How many indexes do you have on the table you're deleting from? How many 
foreign key constraints are there to the table you're deleting from?
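
You can check from the system catalogs (assuming the table is named test, as
above):

-- indexes on the table
select indexname from pg_indexes where tablename = 'test';
-- foreign key constraints that reference it
select conname from pg_constraint where confrelid = 'test'::regclass;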

If you find that it just takes too long, you could drop the indexes and 
foreign key constraints, do the delete, then recreate the indexes and 
foreign key constraints. This can sometimes be faster, depending on just 
what proportion of the table must be deleted.
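
For the foreign key, that would look something like the following; the child
table, column, and constraint names are placeholders for whatever your schema
actually uses:

alter table child_table drop constraint child_table_parent_fk;
delete from test where cola = 'abc';
alter table child_table
  add constraint child_table_parent_fk
  foreign key (parent_id) references test (pk_col);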

Additionally, remember to VACUUM ANALYZE the table after that sort of 
big change. AFAIK you shouldn't really have to if autovacuum is doing 
its job, but it's not a bad idea anyway.
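
In this case that's just:

vacuum analyze test;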

--
Craig Ringer




      
