I think my conclusion was that it has nothing to do with conditions
or indexes: the bottleneck is that when deleting a row, the complete
row is read, and if that row contains a column with a large array,
reading that array dominates the delete (even though the data is
about to be deleted).

Is this conclusion correct? If so, is there any chance this behaviour
(redundant reading of columns) will be changed soon? I didn't see it
on any roadmap.

Finally, how do I send donations to support the work of this great
database?

On May 18, 9:39 am, Thomas Mueller <[email protected]>
wrote:
> Hi,
>
> Generally, you should try to avoid conditions on, or an index on, a
> very large column (for example a very large text column, or an array
> column with 3000 values). If you need to delete multiple rows, you could try:
>
> delete from test where id in (select x from table(x int=?))
>
> Regards,
> Thomas
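
For anyone following along, Thomas's suggestion can be sketched in plain JDBC. This is a minimal example, not the definitive approach: the table name `test`, the `id`/`payload` columns, and the in-memory URL are placeholders, and it assumes H2's `TABLE(x INT = ?)` function accepts the id list bound as a single array parameter.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class BatchDelete {
    // Deletes the given ids in a single statement using H2's TABLE function,
    // so all ids are bound as one array parameter instead of running N deletes.
    static int deleteIds(Connection conn, Object[] ids) throws Exception {
        String sql = "DELETE FROM test WHERE id IN (SELECT x FROM TABLE(x INT = ?))";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setObject(1, ids);       // bind the whole id list at once
            return ps.executeUpdate();  // number of rows deleted
        }
    }

    public static void main(String[] args) throws Exception {
        // In-memory H2 database; the schema here is illustrative only.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo")) {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE test(id INT PRIMARY KEY, payload VARCHAR)");
                st.execute("INSERT INTO test VALUES (1,'a'), (2,'b'), (3,'c')");
            }
            int deleted = deleteIds(conn, new Object[] { 1, 3 });
            System.out.println("deleted " + deleted + " rows");
        }
    }
}
```

The point of the single-statement form is that the id list travels as one parameter, avoiding both a long `IN (1, 2, 3, ...)` literal and one round trip per row.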

-- 
You received this message because you are subscribed to the Google Groups "H2 
Database" group.