Something does not click here:
If you have "a table with millions of rows and an assumed column width of
4MB of data", then how is it possible that "with [MAX_MEMORY_ROWS] set to
1000 the index creation still required a maximum heap of about 800M, but
the OOM Error did not occur anymore"?
A buffer of a thousand such rows should take at least 1000 x 4 MB = 4 GB of
RAM, not 800M.

IMHO, index creation for a big table is an administrative task, presumably
performed on an idle (if not exclusively held) database, so what would
prevent you from opening the database with a MAX_MEMORY_ROWS of, let's say,
3000 (assuming a 4 GB heap), creating the index, and then restarting the
database with your favorite 10 for the application to use? Something like
the sketch below.
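
A minimal sketch of what I mean, assuming a plain JDBC connection and the
SET MAX_MEMORY_ROWS command; the URL, credentials and table/index names are
just placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ReindexWithBigBuffer {
        public static void main(String[] args) throws Exception {
            // Open the database for the administrative task
            // (URL, credentials, table and index names are made up).
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:h2:/data/mydb", "sa", "");
                 Statement st = conn.createStatement()) {

                // Allow a much larger in-memory row buffer for the rebuild;
                // 3000 rows at ~4 MB each is roughly a 4 GB heap.
                st.execute("SET MAX_MEMORY_ROWS 3000");

                // The expensive part: build the index while memory is
                // plentiful.
                st.execute(
                    "CREATE INDEX IDX_BIG_COL ON BIG_TABLE(BIG_COL)");

                // Set it back (or restart with the usual value) before
                // handing the database back to the application.
                st.execute("SET MAX_MEMORY_ROWS 10");
            }
        }
    }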

On the other hand, we probably should select the buffer size as
SQRT(ROWCOUNT), and if that exceeds MAX_MEMORY_ROWS/2, just fall back to the
plain vanilla rebuildIndexBuffered().
It might take forever and will trash the B-tree, but at least it should not
fail with an OOM. Roughly the decision sketched below.
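
A rough sketch of that heuristic; the method names are made up and only
stand in for the sorted and buffered rebuild paths, this is not actual H2
code:

    public class IndexRebuildHeuristic {

        // Placeholder for an in-memory sorted rebuild with the given
        // buffer size (rows sorted in chunks before insertion).
        static void sortedRebuild(int bufferSize) {
            System.out.println("sorted rebuild, buffer = "
                    + bufferSize + " rows");
        }

        // Placeholder for the existing plain buffered rebuild path.
        static void rebuildIndexBuffered() {
            System.out.println("plain buffered rebuild");
        }

        // Pick the strategy: buffer ~ SQRT(rowCount), capped by the
        // memory budget; otherwise fall back to the safe-but-slow path.
        static void rebuildIndex(long rowCount, int maxMemoryRows) {
            long bufferSize = (long) Math.sqrt((double) rowCount);
            if (bufferSize <= maxMemoryRows / 2) {
                sortedRebuild((int) bufferSize);
            } else {
                rebuildIndexBuffered();
            }
        }

        public static void main(String[] args) {
            // sqrt(4M) = 2000 > 10/2      -> buffered fallback
            rebuildIndex(4_000_000L, 10);
            // sqrt(4M) = 2000 <= 10000/2  -> sorted rebuild
            rebuildIndex(4_000_000L, 10_000);
        }
    }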
