Indeed. There's a lot to wonder about in this world.
I have always wondered why H2 supports an array data structure at all,
given that arrays do not fit nicely into the relational model.
- Rami
On 26.5.2012 23:44, essence wrote:
OK, thanks for clarifying. The problem is indeed disk access: the
longer the row, the fewer rows fit on a page, so long arrays
effectively slow down deletion because more page reads need to be
performed.
I only pursued this as a counterpoint to all the advice about IN.
Getting rid of IN does not always help!
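The two query shapes being compared in this thread can be sketched as follows. This is a hypothetical illustration only, using Python's built-in sqlite3 module as a stand-in for H2; the table and column names are made up, not from the original discussion.

```python
import sqlite3

# Stand-in for an H2 database; "items" and "keys" are hypothetical names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, "item%d" % i) for i in range(10)])

wanted = [2, 3, 5, 7]

# Variant 1: the key list is embedded directly in an IN (...) clause.
placeholders = ",".join("?" for _ in wanted)
rows_in = conn.execute(
    "SELECT id FROM items WHERE id IN (%s)" % placeholders, wanted
).fetchall()

# Variant 2: load the keys into a temporary table and join against it,
# the usual "get rid of IN" rewrite the thread refers to.
conn.execute("CREATE TEMP TABLE keys (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO keys VALUES (?)", [(k,) for k in wanted])
rows_join = conn.execute(
    "SELECT items.id FROM items JOIN keys ON items.id = keys.id"
).fetchall()

print(sorted(rows_in) == sorted(rows_join))  # True
```

Both variants return the same rows; as the poster notes, which one is faster depends on the engine and the data, so the rewrite is not an automatic win.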
Interesting that you keep the array data on the same page as the rest
of the row (I assume?). I thought some databases store this kind of
data (and blobs etc.) somewhere else, maybe for this very reason.
On May 21, 6:36 pm, Thomas Mueller<[email protected]>
wrote:
Hi,
the bottleneck is that when deleting a row,
the complete row is read, and if that row contains a column with a
large array, the reading of the array is the bottleneck (even though
it is about to be deleted).
Yes. However, I don't think converting the byte array to a row is the
problem. I guess the problem is that the page is read (from disk). This is
unavoidable unless you delete all rows within a page.
What you could do is delete all rows of a table, using "truncate" or "drop
table".
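A minimal sketch of that suggestion, using Python's sqlite3 module as a stand-in for H2 (SQLite has no TRUNCATE statement, but a whole-table DELETE plays the same role; the table name and payload sizes are made-up examples):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload BLOB)")
# Each row carries a large blob, like the long-array column in the thread.
conn.executemany(
    "INSERT INTO events (id, payload) VALUES (?, ?)",
    [(i, bytes(1000)) for i in range(100)],
)

# Row-by-row deletion forces each row (and its large payload) to be
# read back from its page before removal:
#   conn.execute("DELETE FROM events WHERE id = ?", (some_id,))

# Removing everything at once avoids the per-row reads; in H2 this
# would be "TRUNCATE TABLE events" or "DROP TABLE events".
conn.execute("DELETE FROM events")
count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 0
```

The whole-table form only applies when you really do want every row gone, which is the limitation the reply acknowledges.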
Regards,
Thomas
--
You received this message because you are subscribed to the Google Groups "H2
Database" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to
[email protected].
For more options, visit this group at
http://groups.google.com/group/h2-database?hl=en.