Interesting; I didn't know about memFS and memLZF. Unfortunately, neither of 
those options works for me: I get an exception in the middle of importing my 
second CSV file (database version 1.4.187).
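
For reference, the import is essentially along these lines (a minimal sketch 
of my setup; the table names and CSV paths are placeholders, not my real 
ones):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MemFsImportRepro {
    public static void main(String[] args) throws Exception {
        // In-memory file system database, as suggested; memLZF behaves the
        // same way for me. Table names and CSV paths are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:memFS:test", "sa", "");
             Statement stmt = conn.createStatement()) {
            // The first import completes; the exception below is thrown
            // while importing the second file.
            stmt.execute("CREATE TABLE DATA1 AS SELECT * FROM CSVREAD('data1.csv')");
            stmt.execute("CREATE TABLE DATA2 AS SELECT * FROM CSVREAD('data2.csv')");
        }
    }
}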


General error: "java.lang.ArrayIndexOutOfBoundsException: -2095880" [50000-187] HY000/50000
org.h2.jdbc.JdbcSQLException: General error: "java.lang.ArrayIndexOutOfBoundsException: -2095880" [50000-187]
    at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
    at org.h2.message.DbException.get(DbException.java:168)
    at org.h2.message.DbException.convert(DbException.java:295)
    at org.h2.message.DbException.toSQLException(DbException.java:268)
    at org.h2.message.TraceObject.logAndConvert(TraceObject.java:352)
    at org.h2.jdbc.JdbcStatement.execute(JdbcStatement.java:160)
    at org.h2.server.web.WebApp.getResult(WebApp.java:1390)
    at org.h2.server.web.WebApp.query(WebApp.java:1063)
    at org.h2.server.web.WebApp$1.next(WebApp.java:1025)
    at org.h2.server.web.WebApp$1.next(WebApp.java:1012)
    at org.h2.server.web.WebThread.process(WebThread.java:168)
    at org.h2.server.web.WebThread.run(WebThread.java:93)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.ArrayIndexOutOfBoundsException: -2095880
    at org.h2.store.fs.FileMemData.expand(FilePathMem.java:513)
    at org.h2.store.fs.FileMemData.readWrite(FilePathMem.java:622)
    at org.h2.store.fs.FileMem.read(FilePathMem.java:309)
    at org.h2.store.fs.FileBase.read(FileBase.java:41)
    at org.h2.mvstore.cache.FilePathCache$FileCache.read(FilePathCache.java:81)
    at org.h2.mvstore.DataUtils.readFully(DataUtils.java:429)
    at org.h2.mvstore.FileStore.readFully(FileStore.java:98)
    at org.h2.mvstore.Page.read(Page.java:191)
    at org.h2.mvstore.MVStore.readPage(MVStore.java:1843)
    at org.h2.mvstore.MVMap.readPage(MVMap.java:736)
    at org.h2.mvstore.Page.getChildPage(Page.java:218)
    at org.h2.mvstore.Cursor.min(Cursor.java:129)
    at org.h2.mvstore.Cursor.hasNext(Cursor.java:36)
    at org.h2.mvstore.MVStore.collectReferencedChunks(MVStore.java:1214)
    at org.h2.mvstore.MVStore.freeUnusedChunks(MVStore.java:1183)
    at org.h2.mvstore.MVStore.storeNowTry(MVStore.java:981)
    at org.h2.mvstore.MVStore.storeNow(MVStore.java:973)
    at org.h2.mvstore.MVStore.commitAndSave(MVStore.java:962)
    at org.h2.mvstore.MVStore.beforeWrite(MVStore.java:2097)
    at org.h2.mvstore.MVMap.beforeWrite(MVMap.java:1046)
    at org.h2.mvstore.MVMap.put(MVMap.java:117)
    at org.h2.mvstore.db.TransactionStore.commit(TransactionStore.java:358)
    at org.h2.mvstore.db.TransactionStore$Transaction.commit(TransactionStore.java:779)
    at org.h2.engine.Session.commit(Session.java:507)
    at org.h2.command.Command.stop(Command.java:152)
    at org.h2.command.Command.executeUpdate(Command.java:284)
    at org.h2.jdbc.JdbcStatement.executeInternal(JdbcStatement.java:184)
    at org.h2.jdbc.JdbcStatement.execute(JdbcStatement.java:158)
    ... 7 more 

The same data loads successfully in both mem and file-based modes.

> One option would be to use a "file system in memory", for example 
> "jdbc:h2:memFS:test" or "jdbc:h2:memLZF:test". This is slower than pure in 
> memory, but needs less heap memory. You can use defrag and so on there.


Can you explain how to defrag in mem, memFS, or memLZF mode? The only way 
I'm aware of to defrag and clean up dead data is to execute a "SHUTDOWN 
DEFRAG" command (sketched below), which wouldn't be an option with mem-based 
databases. The roadmap seems to back up my understanding:

http://h2database.com/html/roadmap.html?highlight=defragment&search=defrag#firstFound
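
For context, this is the only defrag mechanism I know of, and it assumes a 
file-based URL (a minimal sketch; the path and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DefragSketch {
    public static void main(String[] args) throws Exception {
        // SHUTDOWN DEFRAG compacts the database file and closes all
        // connections, which only makes sense for a file-based database.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:~/test", "sa", "");
             Statement stmt = conn.createStatement()) {
            stmt.execute("SHUTDOWN DEFRAG");
        }
    }
}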
