We have a horrific performance problem with a table of just 13 rows, each
containing a very small BLOB. The table is presumably full of dead rows, and
every query against it ends up doing a full table scan; here's the relevant
part of the explain plan:
Source result set:
    Table Scan ResultSet for SOMETABLE at read committed isolation level
    using instantaneous share row locking chosen by the optimizer
        Number of columns fetched=4
        Number of pages visited=8546
        Number of rows qualified=13
        Number of rows visited=85040
        optimizer estimated cost: 787747.94
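
For reference, the plan above came from Derby's runtime statistics; from ij
it was captured roughly like this (the SELECT is just a stand-in, only
SOMETABLE comes from the real plan):

    MAXIMUMDISPLAYWIDTH 20000;
    CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(1);
    -- run the slow statement, for example:
    SELECT * FROM SOMETABLE;
    -- then print the statistics for the last statement executed:
    VALUES SYSCS_UTIL.SYSCS_GET_RUNTIMESTATISTICS();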
So I assume I have over 85,000 dead rows in the table (85,040 rows visited
versus 13 qualified), and compressing it does not reclaim the space. In fact,
because we keep adding and deleting rows, the performance gets worse by the
hour, and according to the above plan Derby has visited 8,546 pages (over
32 MB even at the default 4 KB page size) just to fetch 4 columns from each
of the 13 qualifying rows. For the time being, I want to optimize this table
scan before I resort to indices and/or reusing rows. This is with Derby 10.3.
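
By "compressing" I mean the standard system procedures, called along these
lines (I'm assuming the APP schema here; Derby expects the stored, usually
upper-case, names):

    -- full offline compress, which rebuilds the table's conglomerates:
    CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'SOMETABLE', 1);
    -- or the cheaper in-place variant (purge rows, defragment, truncate end):
    CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'SOMETABLE', 1, 1, 1);

For what it's worth, the table's page usage can also be inspected with the
SpaceTable diagnostic VTI (old-style syntax, which 10.3 still accepts, if I
have it right):

    SELECT * FROM NEW org.apache.derby.diag.SpaceTable('SOMETABLE') t;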
Any thoughts?
Thanks