> I have the following situation. A table is used to store values that are
> generated quite frequently. A second process reads the values and does
> something with them. After that processing, the read values are no longer
> needed and they can (should!) be deleted from the database. However, the
> second process might fail (if the network is not available, for example),
> and in this case the values pile up in the table (real situation: 2 GB of
> data in that single table after a network outage of 2 days). Eventually the
> process will work again and those values will be processed - and deleted.
>
> And that is where my problem is. Firebird has a thing with bulk deletions -
> the next time you do a select from the table, the deleted records seem to
> be "cleaned up" (I don't know the technical expression), and if you have
> deleted a lot of rows, this can take forever.
>
> The question is: if I have a select in one transaction that is suffering
> from the cleanup after a deletion of, say, 1 million rows (and it's taking
> forever to return results), will another transaction that just writes a new
> row into the same table also be delayed? Writing speed in that table is
> critical; reading is important but not critical.
>
> Of course I am going to do tests in-house, but I would really appreciate an
> answer from the point of view of the inner workings of Firebird.
Perhaps your read process could mark the records with a flag instead of
deleting them. A cleanup process scheduled e.g. at night then deletes the
marked records, followed by a manual SELECT COUNT(*) on that table in case
co-operative garbage collection is in use (a rough sketch of that approach
is below, after my signature).

Btw, in respect to client transaction management: are you using hard commits
or soft commits, aka commit retaining?

--
With regards,
Thomas Steinmaurer

* Firebird Foundation Committee Member
http://www.firebirdsql.org/en/firebird-foundation/

* Upscene Productions - Database Tools for Developers
http://www.upscene.com/

* My Blog
http://blog.upscene.com/thomas/index.php
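
A minimal sketch of the flag-and-cleanup idea. The table READINGS, the ID and
PROCESSED columns, and the nightly job are assumptions made for illustration,
not names from the original setup:

  -- hypothetical table/column names, for illustration only
  ALTER TABLE READINGS ADD PROCESSED SMALLINT DEFAULT 0;

  -- read process: mark rows as handled instead of deleting them
  UPDATE READINGS SET PROCESSED = 1 WHERE ID = :ID;

  -- nightly cleanup job: delete the marked rows ...
  DELETE FROM READINGS WHERE PROCESSED = 1;

  -- ... then force cleanup of the deleted record versions with a
  -- full table scan (relevant when co-operative GC is in effect)
  SELECT COUNT(*) FROM READINGS;

That way the expensive cleanup pass runs in a scheduled window instead of
hitting the first select after the backlog is processed.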
