Hi Brish,

It's good that you are critical; however, your statements are a bit too general.
> Memory leaks can cause data loss.

This problem has been fixed in version 1.0.75 (2008-07-14); the changelog entry reads: "Running out of memory could result in incomplete transactions or corrupted databases." See http://www.h2database.com/html/changelog.html. If this is still a problem for you, please post a test case.

> H2 isn't durable. While in most cases you won't lose your data now (a
> year ago there were a lot of corruption problems) this can be an issue
> for mission critical data.

Durability is relative. I have tested durability with Derby, MySQL, PostgreSQL, HSQLDB, and regular file system operations. On my hardware, no database was durable. Regular file system operations were not durable either; even fsync was not durable. The only way to guarantee durability is to "wait one second after each write operation", and I don't think that is an acceptable solution. See http://www.h2database.com/html/advanced.html#durability_problems for details. I agree with the following conclusion: "[given the current hardware] you can't make a system that will not lose data, you can only make a system that knows the last save point of 100% integrity. There are too many variables and too much randomness on a cold hard power failure."

> H2 has performance problems with large databases because of poor index
> use.

Again, this is relative. The optimizer of H2 is not as good as those of MySQL and PostgreSQL, but it is much better than that of HSQLDB and about as good as the one in Derby. Sometimes H2 is better (for example, SELECT COUNT(*) is fast in H2 while it is slow in Derby), sometimes worse.

> H2 has problems with large transactions (can use memory issues causing
> data to be lost).

"Causing data loss" is incorrect, but yes, H2 has problems with large transactions beyond a certain point. It depends on how you define "large", how much memory is available, and whether you use tables without unique indexes.
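To illustrate the kind of test behind the durability point above, here is a minimal sketch (plain java.nio, not part of H2, and the file name is just an example) that forces a write to the storage device the way fsync does. Note that force(true) only flushes the operating system cache; a disk with its own write cache enabled can still lose the data on power failure, which is why even fsync turned out not to be durable in these tests.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FsyncSketch {

    // Write a record and ask the OS to flush it to the physical device.
    // force(true) flushes both data and metadata, like fsync(2).
    public static void writeDurably(Path file, byte[] data) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            ch.write(ByteBuffer.wrap(data));
            ch.force(true);
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("durability", ".bin");
        writeDurably(file, "record-1".getBytes());
        // The data is readable afterwards, but only a real power-off test
        // can show whether it actually reached the platters.
        System.out.println(new String(Files.readAllBytes(file)));
        Files.delete(file);
    }
}
```

The only property code alone can verify is that the data survives a normal close and re-read; whether it survives a cold power failure depends on the hardware, which is exactly the point made above.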
> H2 has problems with large queries (if the number of results is below
> a threshold all data is kept in ram which can cause memory issues
> which can result in data loss, if the number of results is above a
> threshold it writes it to disk which is really slow).

I don't know how you relate "data loss" to "queries". In H2, SELECT statements cannot cause data loss. It is true, however, that large result sets are buffered to disk.

> In H2 an insert/update/delete locks the table so no other connections
> can access the table until the table lock is freed via commit, or
> rollback. I think this issue makes h2 unsuitable for a server.

You may not have read about the multi-version concurrency (MVCC) feature of H2. By default MVCC is not used and table-level locks are used, that's true. But that doesn't mean H2 is "unsuitable for a server".

> H2 has one big synchronized lock on the database so you can only run
> one query at a time. There is a multithreaded mode but it's unstable
> currently.

The multithreaded mode does need more testing, and it is currently disabled for the MVCC mode. Currently, the main use cases for H2 are embedded databases, unit testing, demo applications, and server usage with relatively few concurrent clients (let's say fewer than 20 or so; of course it depends on how many statements run concurrently). Once H2 is competitive in that market (and it seems it is), the server mode will be improved.

Regards,
Thomas

--
You received this message because you are subscribed to the Google Groups "H2 Database" group.
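For completeness: the MVCC and multithreaded modes discussed above are enabled per database through settings appended to the JDBC connection URL (setting names as documented for H2 1.0.x; the database path ~/test is just an example):

```
jdbc:h2:~/test;MVCC=TRUE
jdbc:h2:~/test;MULTI_THREADED=1
```

These settings must be used consistently by all connections to the same database, and as noted above the two cannot currently be combined.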
