Hi all,

I have a somewhat unusual setup in my software where I'm running a separate H2
instance for each of my threads. I'm currently running 8 of them, 2 for
each vCPU core. Each thread has exclusive access to its own database, so
they are nice and fast. Because there is no parallel access, and because I
very rarely update anything in these databases, MVCC is not helpful; in my
measurements it was only about half as fast as PageStore.
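
For context, each thread opens its own embedded database roughly like this
(a minimal sketch; the paths are hypothetical, and MV_STORE=FALSE;MVCC=FALSE
is my assumption of how to select PageStore without MVCC on 1.4.x):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class PerThreadDatabases {
        public static void main(String[] args) {
            int threads = 8; // 2 per vCPU core on a 4-core machine
            for (int i = 0; i < threads; i++) {
                final int shard = i;
                new Thread(() -> {
                    // One embedded database per thread; MV_STORE=FALSE selects
                    // the PageStore engine and MVCC=FALSE disables
                    // multi-versioning (settings as I understand them).
                    String url = "jdbc:h2:file:./data/shard" + shard
                            + ";MV_STORE=FALSE;MVCC=FALSE";
                    try (Connection conn =
                            DriverManager.getConnection(url, "sa", "")) {
                        // this thread is the only one ever touching this database
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }).start();
            }
        }
    }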

So far so good. My problems started when I tried scaling these databases to
multiple gigabytes each. As soon as I did, they started corrupting unless I
used "nioMapped". And even then, whenever I open one of these databases it
is always read fully into memory, even if I put "file" in the URL. This
isn't specific to my software: if I open a 2 GB split database in a SQL
console application like SQuirreL, it grows the memory usage by 2 GB or
throws an out-of-memory error.
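
Concretely, these are the kinds of URLs I've been trying (hypothetical
paths; the nioMapped: and split: prefixes as I understand them from the
docs):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class OpenVariants {
        public static void main(String[] args) throws Exception {
            // Plain file store: this is the variant that corrupted once the
            // databases grew to multiple gigabytes.
            String fileUrl = "jdbc:h2:file:./data/shard0;MV_STORE=FALSE;MVCC=FALSE";

            // nioMapped: memory-maps the database file. No more corruption,
            // but the whole file ends up resident in memory.
            String mappedUrl = "jdbc:h2:nioMapped:./data/shard0;MV_STORE=FALSE;MVCC=FALSE";

            // split: stores the database as 1 GB chunks on disk. Opening a
            // 2 GB split database still grows memory use by ~2 GB (or OOMs).
            String splitUrl = "jdbc:h2:split:./data/shard0;MV_STORE=FALSE;MVCC=FALSE";

            try (Connection conn =
                    DriverManager.getConnection(mappedUrl, "sa", "")) {
                // same memory behaviour whether I open it here or in SQuirreL
            }
        }
    }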

Is there anything obvious I'm doing wrong? I understand that this is not
something most people do, but I read through what I could before posting
here and couldn't find anything that says what I'm doing is wrong.

Any help would be very much appreciated.

Thanks,
Andras Gerlits
