Thomas,

That's excellent detective work; I was despairing of replicating it.

I'll move the log size up significantly and let you know if that lets
me get to the current version.

Thanks a lot!

Chris

On Mar 11, 3:16 pm, Thomas Mueller <[email protected]>
wrote:
> Hi,
>
> I can now reproduce the problem. The database writes many checkpoints
> unnecessarily, which slows down the operation. It does that because
> the log "file" (well, it's no longer a file, it's a segment) is too
> large. If there is an open transaction, however, it can't delete the
> old log segment, so it will create a new segment for every 32 (by
> default) sequences. I will fix that in the next release.
>
> A workaround is to use a larger max_log_size or smaller transactions.
>
> Regards,
> Thomas
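
For anyone following along: the workaround Thomas mentions can be applied with H2's SET command. A minimal sketch (the 64 MB value is just an assumption; tune it for your workload):

```sql
-- Raise the maximum transaction log size (in MB) so the database
-- writes fewer checkpoints while long transactions are open.
SET MAX_LOG_SIZE 64;
```

The same setting can also be appended to the JDBC URL, e.g. `jdbc:h2:~/test;MAX_LOG_SIZE=64`, if you'd rather not issue the statement by hand.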

-- 
You received this message because you are subscribed to the Google Groups "H2 
Database" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/h2-database?hl=en.