Hi all!

I've been using H2 for years now and I'm very happy with it. Its speed and 
ease of use are usually unmatched (especially in that combination).

One problem I have repeatedly faced is the performance for large updates. I 
have several tables with >100,000 rows and about 10 columns (H2 1.4.197, 
database about 5GB). If I want to update one column for a large number of 
rows, the process takes unfathomably long to complete. In fact, doing it 
manually with an updatable ResultSet is *several orders of magnitude* 
faster. This really is a pity, because H2 is otherwise a great database 
engine.
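
To make the comparison concrete, here is a minimal sketch of the 
ResultSet-based approach I mean (table and column names ITEMS, FLAG and 
VALUE are made up for illustration; the real schema differs):

```java
import java.sql.*;

public class ResultSetUpdate {

    // Update VALUE in place for every row whose FLAG is TRUE, using an
    // updatable ResultSet instead of a single UPDATE statement.
    static int updateFlagged(Connection conn, int newValue) throws SQLException {
        int updated = 0;
        try (Statement st = conn.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_UPDATABLE);
             // ID (the primary key) must be selected for the result set
             // to be updatable in H2.
             ResultSet rs = st.executeQuery(
                 "SELECT ID, VALUE FROM ITEMS WHERE FLAG")) {
            while (rs.next()) {
                rs.updateInt("VALUE", newValue);
                rs.updateRow();
                updated++;
            }
        }
        return updated;
    }

    public static void main(String[] args) throws SQLException {
        // In-memory database purely for demonstration.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo")) {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE ITEMS(ID INT PRIMARY KEY, FLAG BOOLEAN, VALUE INT)");
                // Flag every second row.
                st.execute("INSERT INTO ITEMS SELECT X, MOD(X, 2) = 0, 0 FROM SYSTEM_RANGE(1, 1000)");
            }
            System.out.println(updateFlagged(conn, 42));
        }
    }
}
```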

I've tried several things so far:

   - SET CACHE_SIZE 2097152;
   - SET MAX_OPERATION_MEMORY 0;
   - restart
   - index + analyze (not sure if indexes are useful because the 
   statement's where clause selects on a boolean field---even if they were, 
   the update takes several times longer than a full scan & export, and this 
   just can't be right, can it?)
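
For completeness, the attempts above amount to statements like these 
(the index and table/column names here are placeholders, not my real 
schema):

```sql
-- Cache size is given in KiB, so this is a 2 GiB page cache.
SET CACHE_SIZE 2097152;
-- 0 removes the per-operation memory limit.
SET MAX_OPERATION_MEMORY 0;
-- Index on the boolean column the WHERE clause selects on, then
-- refresh the optimizer's statistics.
CREATE INDEX IF NOT EXISTS IDX_ITEMS_FLAG ON ITEMS(FLAG);
ANALYZE;
```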

I've run EXPLAIN ANALYZE UPDATE ... and that, too, takes ages (as I 
understand it, EXPLAIN ANALYZE actually executes the statement anyway, 
so that part is perhaps expected?).

So what am I doing wrong? Why does the UPDATE (or the preceding EXPLAIN 
ANALYZE) take several orders of magnitude longer than a full scan? It 
almost seems like the time needed to complete the UPDATE grows 
exponentially in the number of rows involved.

Thanks and best wishes,
Philipp
