Hi,
It's not really a question, more of an experience summary with H2 (we are
using H2 1.4.190).
We had a business need for a huge in-memory cache (we are talking about
110M records) to make our statistical queries faster.
In the end, we decided to use H2 instead of a caching framework (with this
option we have to modify less of the existing code and can keep using our
existing JDBC pooling handlers).
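For anyone who wants to try the same setup: an in-memory H2 database can be
opened through the normal JDBC pool with a URL like the one below (the
database name is made up for illustration; DB_CLOSE_DELAY=-1 keeps the
in-memory contents alive as long as the JVM runs, instead of dropping them
when the last connection closes):

```
jdbc:h2:mem:statcache;DB_CLOSE_DELAY=-1
```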
So basically, the cached records are simple ones with 1 long and 6 integer
columns (so 32 bytes of raw data per record).
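The table is essentially this shape (the table and column names here are
made up for illustration; ours differ):

```sql
-- roughly 32 bytes of raw data per row: one BIGINT (8) + six INTs (6 x 4)
CREATE TABLE stat_cache (
    id   BIGINT NOT NULL,
    dim1 INT,
    dim2 INT,
    dim3 INT,
    dim4 INT,
    dim5 INT,
    dim6 INT
);
```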
Leaving aside the load time (on a simple server, copying from the existing
MS SQL database takes about 2-4 hours, depending on the current load), here
are our results:
1. 110M records in an in-memory H2 db take about 35-36 GB of memory
2. Simple queries are extremely fast (thanks Thomas!)
   1. a select count(*) takes about 1 ms
   2. selecting records and counting them by integer ranges takes at most
35 seconds (without indexes, wow; it is not really faster with indexes on a
normal SQL Server)
3. Due to the storage mechanics, memory usage is not linear in the record
count (the per-record cost drops from roughly 900 bytes at 10M records to
roughly 330 bytes at 110M):
   1. 10M records took about 9 GB of memory
   2. 25M records took about 21 GB of memory
   3. 110M records took 35-36 GB of memory
4. Creating a HASH index after the table was filled killed the server
   1. => create the hash index before populating the table
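To make the ordering lesson and the measured queries concrete, here is a
rough sketch, assuming a cache table stat_cache with a BIGINT id and
integer columns dim1..dim6 (made-up names, not our actual schema):

```sql
-- 1. Create the hash index while the table is still empty;
--    building it after loading 110M rows is what killed our server.
CREATE HASH INDEX idx_stat_cache_dim1 ON stat_cache(dim1);

-- 2. Only then bulk-load, e.g. from the source database:
-- INSERT INTO stat_cache SELECT ... FROM source_table;

-- 3. The kinds of queries we measured:
SELECT COUNT(*) FROM stat_cache;              -- about 1 ms

SELECT dim1, COUNT(*)                         -- at most ~35 s
  FROM stat_cache                             -- even without indexes
 WHERE dim1 BETWEEN 100 AND 200
 GROUP BY dim1;
```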
I hope this data helps someone (if I have anything more to add, I
will).
Also, if possible, we would like to get in touch with Thomas personally (I
couldn't find your email, just this mailing list), so both of us could
learn about handling bigger in-memory DBs :)
Thanks,
Csaba Sarkadi
--
You received this message because you are subscribed to the Google Groups "H2
Database" group.
Visit this group at http://groups.google.com/group/h2-database.