Hi, I have a test case now. In my case an out-of-memory error occurs (I use java -Xmx3m to force that), the database then tries to roll back, another out-of-memory error occurs during the rollback, and the rollback is only partial. That's not good, of course... My plan is to close the database if an out-of-memory error occurs during the rollback. I'm not sure if there is a better solution.
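For reference, the recovery path I have in mind looks roughly like this. This is only a sketch with invented names (`Db`, `rollbackOrClose`), not H2's actual internals: if the rollback itself runs out of memory, close the database instead of leaving a partially rolled back state, so the next open recovers from the log.

```java
// Sketch of the planned behavior (hypothetical interface, not H2's real API):
// a rollback that fails with OutOfMemoryError must not be left half-done.
interface Db {
    void rollback();
    void forceClose();
}

class RollbackGuard {
    /** Returns true if the rollback completed, false if the db had to be closed. */
    static boolean rollbackOrClose(Db db) {
        try {
            db.rollback();
            return true;
        } catch (OutOfMemoryError e) {
            // A partial rollback would leave the files inconsistent;
            // closing forces recovery on the next open instead.
            db.forceClose();
            return false;
        }
    }

    public static void main(String[] args) {
        final boolean[] closed = {false};
        Db failing = new Db() {
            public void rollback() { throw new OutOfMemoryError("simulated"); }
            public void forceClose() { closed[0] = true; }
        };
        boolean ok = rollbackOrClose(failing);
        System.out.println("rollback ok: " + ok + ", closed: " + closed[0]);
        // prints: rollback ok: false, closed: true
    }
}
```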
I am still testing my solution; it will be included in the next release. The workaround is to not delete so many rows in one step. The long-term solution in the database is to better support large transactions, but that will take some more time.

Regards,
Thomas

On Mon, Oct 13, 2008 at 6:16 PM, Mandres <[EMAIL PROTECTED]> wrote:
>
> Hi Thomas!
>
> These are my answers:
>
>>> - What is your database URL?
> jdbc:h2:c:\Historico\bd\anexoh;IFEXISTS=TRUE
>
>>> - What version of H2 are you using?
> H2 1.0.74 (2008-06-21)
>
>>> - With which version of H2 was this database created?
> NAME - VALUE
> ----------------------------------------------
> CREATE_BUILD 67
>
>>> - Did you use multiple connections?
> No.
>
>>> - Do you use any settings or special features (for example, the setting LOG=0,
>>> or two phase commit, linked tables, cache settings)?
> No.
>
>>> - Is the application multi-threaded?
> No.
>
>>> - On what operating system, file system, and virtual machine (java -version)?
> Windows XP SP3, NTFS, Java version 1.6.0_07
>
>>> - How big is the database (file sizes)?
> Total size = 3.30 GB
>
>>> - Is it possible to reproduce this problem using a fresh database
>>> (sometimes, or always)?
> I do not know yet.
>
>>> - Are there any other exceptions (maybe in the .trace.db file)?
>>> Could you send them please?
> I do not know. Of course.
>
>>> - Do you still have any .trace.db files, and if yes could you send them?
> I still have .trace.db files after closing the connection. Is that
> normal?
>
> Thanks!!!
>
>
> On 11 oct, 07:23, "Thomas Mueller" <[EMAIL PROTECTED]>
> wrote:
>> Hi,
>>
>> > I use H2 with an embedded app. I have a table with 6,000,000 rows; I
>> > tried to delete 300,000 of them, but I got a "Java Heap Space"
>> > exception. I tried again deleting records by criteria, and this solved
>> > my problem...
>> > BUT, when I make a query for the data I deleted, it shows results
>> > containing data that was already deleted.
>> > I ran my delete query again and it shows me this exception:
>> > General error: java.lang.RuntimeException: File ID mismatch got=0
>> > expected=47 pos=12838178 true org.h2.store.DiskFile:C:\Historico\bd
>> > \anexoh.data.db blockCount:0 [50000-78] HY000/50000
>>
>> This sounds like a bug. Unfortunately I can't reproduce the problem so
>> far. I am very interested in analyzing and solving this problem;
>> corruption problems have top priority for me. I have a few questions:
>>
>> - What is your database URL?
>> - What version of H2 are you using?
>> - With which version of H2 was this database created?
>>   You can find out using:
>>   select * from information_schema.settings where name='CREATE_BUILD'
>> - Did you use multiple connections?
>> - Do you use any settings or special features (for example, the setting LOG=0,
>>   or two phase commit, linked tables, cache settings)?
>> - Is the application multi-threaded?
>> - On what operating system, file system, and virtual machine (java -version)?
>> - How big is the database (file sizes)?
>> - Is it possible to reproduce this problem using a fresh database
>>   (sometimes, or always)?
>> - Are there any other exceptions (maybe in the .trace.db file)?
>>   Could you send them please?
>> - Do you still have any .trace.db files, and if yes could you send them?
>>
>> Regards,
>> Thomas
