On 3/12/2024 10:28 AM, Ulrich wrote:

My database has around 50,000 tables storing several hundred million rows. It's a kind of time-series database. Data is added continuously (around 1500 rows per second) and data older than 30 days is removed once a day.


I suggest (if you are not doing this already) that you move to an architecture where, instead of running DELETE when removing old data, you can run DROP TABLE or TRUNCATE TABLE, which will be more efficient with H2.
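
For example (just a sketch, the table and column names are made up), with one table per day of data the daily cleanup becomes a single DROP TABLE instead of a large DELETE:

    -- one table per day (hypothetical names)
    CREATE TABLE IF NOT EXISTS MEASUREMENTS_2024_03_12 (
        TS        TIMESTAMP,
        SENSOR_ID INT,
        VAL       DOUBLE
    );

    -- daily cleanup: drop the table that just became older than 30 days,
    -- instead of DELETE FROM ... WHERE TS < DATEADD('DAY', -30, NOW())
    DROP TABLE IF EXISTS MEASUREMENTS_2024_02_11;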

Since the automatic compaction did not show nice results, I decided to switch it off, set MAX_COMPACT_TIME to 30000, and shut down the database every 5 minutes using SHUTDOWN. I use SHUTDOWN instead of SHUTDOWN COMPACT to keep control over the maximum time during which the database is not available.


Unfortunately SHUTDOWN just does not try very hard; if you want to reduce disk space you will need to use SHUTDOWN COMPACT.
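
Roughly the difference, as I understand it (MAX_COMPACT_TIME is in milliseconds and only bounds the compaction work done on a normal close):

    -- bounded effort on a normal close: predictable downtime,
    -- but may leave unused space in the database file
    SET MAX_COMPACT_TIME 30000;
    SHUTDOWN;

    -- full compaction on close: reclaims disk space,
    -- but can take much longer on a large database
    SHUTDOWN COMPACT;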

H2 is unfortunately not a great match for your specific use case, and I don't think there is really anything in the way of database parameters that will make a big difference.

You could try batching your inserts (i.e. inserting a bunch of rows before doing a COMMIT); that sometimes helps reduce the disk usage.
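
Something like this (a sketch, reusing the made-up table name from above):

    -- batch many inserts into one transaction instead of one commit per row
    SET AUTOCOMMIT FALSE;
    INSERT INTO MEASUREMENTS_2024_03_12 VALUES (NOW(), 1, 23.5);
    INSERT INTO MEASUREMENTS_2024_03_12 VALUES (NOW(), 2, 17.1);
    -- ... collect a few hundred to a few thousand rows, then:
    COMMIT;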

Regards, Noel.
