Hello,

I have the following situation. A table is used to store values that are 
generated quite frequently. A second process reads those values and does 
something with them; after that processing, the values are no longer needed 
and can (should!) be deleted from the database. However, the second process 
might fail (if the network is not available, for example), and in that case 
the values pile up in the table (real situation: 2 GB of data in that single 
table after a network outage of 2 days). Eventually the process will work 
again and those values will be processed - and deleted.
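
For concreteness, a minimal sketch of the pattern; the table and column names 
here are made up for illustration, not my real schema:

  -- Hypothetical queue table; the producer inserts rows at a high rate.
  CREATE TABLE QUEUE_VALUES (
      ID      BIGINT NOT NULL PRIMARY KEY,
      PAYLOAD VARCHAR(1024)
  );

  -- Consumer cycle: read a batch, process it, delete what was processed.
  SELECT ID, PAYLOAD FROM QUEUE_VALUES ORDER BY ID;
  DELETE FROM QUEUE_VALUES WHERE ID <= :last_processed_id;
  COMMIT;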

And that is where my problem is. Firebird has a thing with bulk deletions: 
the next time you do a select from the table, the deleted record versions get 
"cleaned up" (garbage collection is, I believe, the technical term), and if 
you have deleted a lot of rows, this can take forever.
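
In terms of the sketch above, this is the sequence that hurts:

  -- The consumer finally catches up and deletes the whole backlog.
  DELETE FROM QUEUE_VALUES WHERE ID <= :last_processed_id;
  COMMIT;

  -- The next statement that scans the table visits all the deleted
  -- record versions and cleans them up, so it can run for a very long
  -- time even though it returns almost nothing.
  SELECT COUNT(*) FROM QUEUE_VALUES;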

The question is: if a select in one transaction is suffering from this 
cleanup after a deletion of, say, 1 million rows (and is taking forever to 
return results), will another transaction that just writes a new row to the 
same table also be delayed? Writing speed in that table is critical; reading 
is important but not critical.
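
In other words, while session A is stuck in the COUNT(*) above, session B is 
doing nothing more than this - will that insert also be delayed until 
session A's cleanup is done?

  -- Session B: the time-critical writer. It only inserts new rows and
  -- never touches the old, deleted record versions.
  INSERT INTO QUEUE_VALUES (ID, PAYLOAD) VALUES (:new_id, :data);
  COMMIT;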

Of course I am going to run tests in-house, but I would really appreciate an 
answer from the point of view of Firebird's internals.
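
For reference, this is roughly the test I plan to run in isql (assuming 
Firebird 2.0 or later for EXECUTE BLOCK, and the made-up table from above):

  -- Session 1: build a backlog of 1 million rows, then bulk-delete it.
  SET TERM ^ ;
  EXECUTE BLOCK AS
    DECLARE i BIGINT = 0;
  BEGIN
    WHILE (i < 1000000) DO
    BEGIN
      INSERT INTO QUEUE_VALUES (ID, PAYLOAD) VALUES (:i, 'x');
      i = i + 1;
    END
  END^
  SET TERM ; ^
  COMMIT;
  DELETE FROM QUEUE_VALUES;
  COMMIT;

  -- Session 1: trigger the cleanup and time it.
  SET STATS ON;
  SELECT COUNT(*) FROM QUEUE_VALUES;

  -- Session 2, started while the COUNT(*) above is still running:
  -- time how long this single insert takes.
  INSERT INTO QUEUE_VALUES (ID, PAYLOAD) VALUES (1000001, 'y');
  COMMIT;

But what I am really after is the explanation, not just the numbers.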

Thanks a lot in advance,

Fernando Nájera
