I have an H2 database where, after loading the data, the database grows to 
24GB.

If I dump & recreate the database, it shrinks to 1.5GB - but obviously 
that takes time to do.
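
For reference, the dump & recreate I'm doing is roughly H2's SCRIPT / 
RUNSCRIPT cycle (file name is just a placeholder):

    -- dump the live database to a SQL script
    SCRIPT TO 'dump.sql';
    -- then, against a freshly created database:
    RUNSCRIPT FROM 'dump.sql';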

I'd like to try to reduce the on-disk size of the database - I'm not 
necessarily looking for compression or anything; I'm assuming much of 
this ~20x overhead is dead space left behind by checkpoints etc.

I also know that by default H2 spends ~500ms compacting the database when 
it is closed, but I'd rather not defer all of that work to the end.
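
If it helps, the shutdown-time options I'm aware of are something like 
(the MAX_COMPACT_TIME value below is just an illustrative number):

    -- compact fully at close, however long it takes
    SHUTDOWN COMPACT;

or raising the shutdown compaction budget via the connection URL:

    jdbc:h2:~/mydb;MAX_COMPACT_TIME=10000

But both of these still do the work at close time, which is what I'm 
hoping to avoid.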

Is it possible to run this compaction task in the background on an ongoing 
basis, like other databases do?

Thanks in advance.

-- 
You received this message because you are subscribed to the Google Groups "H2 
Database" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/h2-database/fe4d9de3-500c-4d0e-995d-4fe3b706c10b%40googlegroups.com.
