Hello Thomas,

In addition to the potential data corruption described in the previous
message, we noticed that the log flush is very inefficient.
Specifically, it writes out every 128-byte block as a separate file
write. On slow (flash) file systems this causes a lot of unnecessary
overhead; we have seen database open times of 20 minutes on large
databases stored on flash file systems.

We have implemented an experimental change that batches the deletes
done for contiguous record IDs. It was easy, since the delete
operation just fills the block in the file with zeroes. In our
experiments recovery performance improved by a factor of 3, and we
expect a further improvement from increasing the REDO_BUFFER_SIZE. A
rough sketch of the idea is included below.

Could you take a look and see whether the file writes during database
recovery could be optimized (larger blocks written in one write,
rather than many small writes)? We would be glad to share our
experimental modifications with you.
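
For illustration, here is a minimal sketch of the batching idea. The
class and method names (BatchedZeroWriter, zeroOutBlocks) and the
BLOCK_SIZE constant are our own placeholders, not taken from the H2
sources: it simply merges sorted, contiguous block IDs into runs and
zeroes each run with a single write.

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.List;

// Sketch only: merge contiguous block IDs and zero each run with one write.
public class BatchedZeroWriter {

    private static final int BLOCK_SIZE = 128; // assumed log block size

    // blockIds must be sorted in ascending order
    public static void zeroOutBlocks(RandomAccessFile file, List<Long> blockIds)
            throws IOException {
        int i = 0;
        while (i < blockIds.size()) {
            long start = blockIds.get(i);
            int run = 1;
            // extend the run while the next block ID is contiguous
            while (i + run < blockIds.size()
                    && blockIds.get(i + run) == start + run) {
                run++;
            }
            // one write for the whole run instead of 'run' separate 128-byte writes
            file.seek(start * BLOCK_SIZE);
            file.write(new byte[run * BLOCK_SIZE]);
            i += run;
        }
    }
}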

Thank you,
Alex
