One of our H2 databases (MVStore cached tables, h2-1.4.200, embedded mode, 
JDK13, dedicated Linux server with 8 GB RAM and 4 CPUs) is about 1.8 GB in 
size. It is queried and updated quite frequently by many application users 
concurrently. The database does grow, but only at a very slow pace of 
perhaps 20% per year.

Over the last couple of months the performance of the application has 
decreased significantly. During high load, CPU usage climbs to ~400% with 
load averages (as reported by top) of up to 20. But even with almost no 
load (at night, when nobody is using the application), CPU stays at around 
200%.

I was able to rule out all application background processes as a possible 
cause, so I decided to do a complete SQL dump/recreate of the database. The 
result was stunning: CPU during high load now averages around 100%, and it 
drops to 0% when things quiet down.
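
In case it's relevant, the dump/recreate was essentially the equivalent of 
the sketch below (paths, database names and credentials are just 
placeholders, not our real setup): export the old database to a plain SQL 
script with SCRIPT TO, then replay it into a brand-new database file with 
RUNSCRIPT FROM.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class DumpAndRecreate {
        public static void main(String[] args) throws Exception {
            // Export the existing database to a plain SQL script.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:h2:/data/app/olddb", "sa", "");
                 Statement st = con.createStatement()) {
                st.execute("SCRIPT TO '/data/app/dump.sql'");
            }
            // Replay the script into a freshly created database file.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:h2:/data/app/newdb", "sa", "");
                 Statement st = con.createStatement()) {
                st.execute("RUNSCRIPT FROM '/data/app/dump.sql'");
            }
        }
    }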

Can you explain this? Is it a general rule that database performance 
degrades over time regardless of how much the database actually grows, and 
that it is advisable to dump/restore periodically? If so, are there any 
metrics that can identify whether and when such a step is called for?

Thanks for any insights.
