Hi, we have a database with one table: 800'000 rows, about 50 columns, and one index per column. We currently have a problem with the time it takes to create all the indexes (about 10 minutes). The database size with all indexes created is about 1.5 GB. We profiled our application with the jvisualvm profiler and found that a noticeable amount of time is spent on I/O (reading pages) and on parsing raw data, because the database doesn't fit into the cache. We tried increasing the cache size up to 3 GB, but it still seems too small, and `CACHE_TYPE=TQ` has no effect.
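For reference, this is roughly how we configure the cache (database path and name are simplified here); as far as we understand, `CACHE_SIZE` is given in KB, so 3 GB is about 3'145'728:

```
jdbc:h2:~/ourdb;CACHE_SIZE=3145728;CACHE_TYPE=TQ
```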
We've tried to investigate this more deeply, and it looks like the memory estimates are wrong for our data set. jvisualvm shows a retained size of 403'152'540 for the PageStore object and 33'554'517 for the CacheLRU. At the same time, cache.memory is 204'787'828 (i.e. about 800 MB) and cache.maxMemory is 204'800'000.

We also found a PageDataLeaf whose `memoryEstimated` field is 14'394 (x4 bytes, i.e. an estimate of 57'576 bytes), while the retained size of its `rows` field is only 7'203 bytes (16 of 18 rows initialized). Is that estimate OK if the page size is 16'384? There are a lot of objects which satisfy the query `select d from org.h2.index.PageDataLeaf d where rsizeof(d.rows)+16443+100 < d.memoryEstimated*4`.

Is it possible to tune how the size is estimated, or perhaps to set it to a constant? And is it possible to lazily initialize the fields of rows? In our case, creating an index only requires one field of each row, and since the rows don't benefit from the cache anyway, parsing the whole row just wastes CPU.

--
You received this message because you are subscribed to the Google Groups "H2 Database" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to [email protected].
For more options, visit this group at http://groups.google.com/group/h2-database?hl=en.
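To make the discrepancy concrete, here is the arithmetic we are doing, as plain Java with the values hardcoded from the profiler (these are our measurements, not H2 API calls):

```java
public class PageEstimateCheck {
    public static void main(String[] args) {
        // Figures observed in jvisualvm for a single org.h2.index.PageDataLeaf:
        int memoryEstimated = 14_394;   // H2's estimate, in 4-byte units
        int pageSize = 16_384;          // configured page size, in bytes
        int rowsRetainedBytes = 7_203;  // retained size of the `rows` array

        int estimatedBytes = memoryEstimated * 4;
        System.out.println("estimated bytes per page: " + estimatedBytes); // 57576
        System.out.printf("estimate / page size: %.1fx%n",
                (double) estimatedBytes / pageSize); // 3.5x
        System.out.println("actual rows retained: " + rowsRetainedBytes);
        // If the per-page estimate is ~3.5x the page size, a maxMemory of
        // 204'800'000 units fills up with far fewer pages than it should,
        // which would explain why even a large cache seems too small.
    }
}
```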
