Hi all,

I originally created a single-table database that must store 32GB of
sensor data (each row is just a few ints).  The two main concerns are
that the database use very little memory (under 200MB is ideal) and
that inserts be very fast.

I quickly found that once the database grew to 5GB, with an index
over a single attribute, insert throughput dropped from roughly 70
writes/millisecond on a clean start to under 5 writes/millisecond,
and it seemed to keep slowing as the database grew.
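
For reference, the single-table test looks roughly like this (table,
column, and database names are placeholders, not my real schema):

    import java.sql.*;

    public class WriteTest {
        public static void main(String[] args) throws Exception {
            Class.forName("org.h2.Driver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:h2:~/sensordb", "sa", "");
            Statement st = conn.createStatement();
            st.execute("CREATE TABLE READINGS(SENSOR INT, TS INT, VAL INT)");
            // the index over a single attribute
            st.execute("CREATE INDEX IDX_TS ON READINGS(TS)");

            PreparedStatement ins = conn.prepareStatement(
                    "INSERT INTO READINGS VALUES(?, ?, ?)");
            for (int i = 0; i < 10000000; i++) {  // 10 million row test
                ins.setInt(1, i % 36);  // stand-in for real sensor values
                ins.setInt(2, i);
                ins.setInt(3, i * 7);
                ins.executeUpdate();
            }
            conn.close();
        }
    }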

I am trying to alleviate the problem by partitioning over the data's
domain.  There is an attribute that takes only about 36 distinct
values, so I created 36 tables, one for each value the attribute can
take.  However, I am now getting an out-of-memory (heap) error during
my 10-million-row write test.
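
Simplified, the partitioned version looks something like the snippet
below (continuing from the code above): one database file per
partition table, so 36 connections, 36 tables, and 36 indexes in
total.  Names are again placeholders.

    // One database, one table, one index, and one prepared insert
    // per possible value of the partitioning attribute (36 in total).
    Connection[] conns = new Connection[36];
    PreparedStatement[] inserts = new PreparedStatement[36];
    for (int v = 0; v < 36; v++) {
        conns[v] = DriverManager.getConnection(
                "jdbc:h2:~/sensordb_part_" + v, "sa", "");
        Statement st = conns[v].createStatement();
        st.execute("CREATE TABLE READINGS(TS INT, VAL INT)");
        st.execute("CREATE INDEX IDX_TS ON READINGS(TS)");
        inserts[v] = conns[v].prepareStatement(
                "INSERT INTO READINGS VALUES(?, ?)");
    }

    // Route each row to the partition for its attribute value.
    for (int i = 0; i < 10000000; i++) {
        int v = i % 36;  // stand-in for the real attribute value
        inserts[v].setInt(1, i);
        inserts[v].setInt(2, i * 7);
        inserts[v].executeUpdate();
    }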

I assume the problem is related to the fact that I now have 36 tables
and 36 index files sitting in memory.  Each of the 36 tables has a
1MB cache.  However, the same 10-million-row write test over a single
indexed table with a 50MB cache did not give me problems.  My JVM's
maximum heap is set to 100MB.  It seems the single-table case (50MB
cache) should use more memory than the 36-table case (36 * 1MB = 36MB
of cache).
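
In case the exact settings matter, the caches are set through the
connection URLs (H2's CACHE_SIZE is in KB), and both tests run with
the JVM flag -Xmx100m; roughly:

    // Single-table test: one database opened with a 50MB cache
    // (CACHE_SIZE is given in KB).
    Connection single = DriverManager.getConnection(
            "jdbc:h2:~/sensordb;CACHE_SIZE=51200", "sa", "");

    // Partitioned test: each partition database opened with a 1MB cache.
    Connection part0 = DriverManager.getConnection(
            "jdbc:h2:~/sensordb_part_0;CACHE_SIZE=1024", "sa", "");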

I need help understanding how the tables are stored, and whether
there is a better way to partition my data, because it is not scaling
well.

Any help is greatly appreciated.

Thanks,
Julian