Yes, databases are shut down and reopened constantly, since the connection
pool only keeps unused connections around for a short period of time
before closing them. As a result, databases are closed by H2 automatically
once the last connection is closed. This particular type of issue appears
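As an aside (not suggested in the thread itself), H2's standard way to decouple database lifetime from connection lifetime is the DB_CLOSE_DELAY URL setting, which keeps the database open for the given number of seconds after the last connection closes (-1 keeps it open until the JVM exits). A sketch, reusing the URL shape from later in the thread:

```
jdbc:h2:/path/to/file;DB_CLOSE_DELAY=30
```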
On 2019/05/21 9:52 AM, christoff.schm...@finaris.de wrote:
I tracked this down to the *rebuildIndexBlockMerge* method of the *MVTable*
class (see below).
As I saw that the *MAX_MEMORY_ROWS* parameter is used in the method, I changed
its value and tried again.
With it set to 1000, the index
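For reference, MAX_MEMORY_ROWS can also be changed per database at runtime via H2's SQL command (a configuration sketch; the value 1000 mirrors the experiment above):

```sql
-- H2 database setting (requires admin rights); lowers the number of
-- rows H2 keeps in memory during operations such as index rebuilds.
SET MAX_MEMORY_ROWS 1000
```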
Unfortunately we currently have another case of database corruption. The
exception stack trace is identical to the one I posted above. What strikes
me is that the message reads "database: flush" and subsequently the stack
trace seems to imply the database is being opened. That sounds
The database URLs currently look like:
jdbc:h2:/path/to/file;IGNORECASE=TRUE;MVCC=TRUE;LOCK_TIMEOUT=1
I have experimented with different FILE_LOCK settings, but since the
databases are accessed from a single process this is unlikely to have any
influence. I could probably do without
Thank you, that is an excellent suggestion. The times are very short now
but there is no pressing reason for that other than hoping it would reduce
resource consumption. I will change that ASAP.
--
You received this message because you are subscribed to the Google Groups "H2
Database" group.
On 2019/05/21 4:21 PM, Silvio wrote:
> The database URLs currently look like:
None of that sounds problematic. You might be able to work around this issue simply by telling your pool to keep
connections alive for longer, which will reduce the number of startup/shutdown cycles each database goes through.
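The suggestion above might look like the following, assuming a HikariCP pool (the thread does not name the pool implementation, so this is a hypothetical configuration sketch, not runnable without the HikariCP dependency):

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.util.concurrent.TimeUnit;

// Keep idle connections alive longer so the last connection to a
// database closes less often, reducing H2 shutdown/reopen cycles.
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:h2:/path/to/file;IGNORECASE=TRUE;MVCC=TRUE;LOCK_TIMEOUT=1");
config.setMinimumIdle(1);                              // keep at least one connection open
config.setIdleTimeout(TimeUnit.MINUTES.toMillis(10));  // instead of a very short timeout
HikariDataSource ds = new HikariDataSource(config);
```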
Additionally, large write queues are very common in our system. At least
one of the most recent corruptions occurred when a user performed an action
that caused a new table to be created and ~200K records to be inserted into
it (using separate connect/insert/disconnect cycles, pooled of course).
Hi Noel,
our users typically deal with huge amounts of data, which often do not fit
into memory. Tables might have hundreds of columns, so even a low number
of rows held in memory can occupy a lot of it.
Additionally, queries are often issued in parallel, which is why a low value
was set
Sorry for spamming but I am just thinking out loud here:
Suppose we do leak connections across threads, and multiple writes and/or
closes are performed on the same connection. Could that cause database
corruption? I know it is a hard call, but does it sound likely or even
possible that changes
What does one of your database URLs look like?
Are you perhaps calling Thread.interrupt() somewhere?
If so, that would close the database abruptly and force it to be re-opened on
the next connection attempt.
Doing that enough times is likely to cause corruption, because all kinds of stuff could be
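The mechanism behind this is standard JDK behavior, not anything H2-specific: writing to an NIO FileChannel while the thread's interrupt flag is set closes the channel and throws ClosedByInterruptException. A minimal self-contained sketch (a throwaway temp file stands in for H2's store file):

```java
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class InterruptDemo {
    // Returns a short description of what happens when writing to a
    // file channel while the current thread's interrupt flag is set.
    static String demo() throws Exception {
        Path tmp = Files.createTempFile("interrupt-demo", ".bin");
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            Thread.currentThread().interrupt();
            try {
                ch.write(ByteBuffer.wrap(new byte[]{1, 2, 3}));
                return "write succeeded";
            } catch (ClosedByInterruptException e) {
                // NIO closes the channel before raising this exception;
                // if the channel were H2's database file, the store would
                // now be closed mid-operation.
                return "channel closed by interrupt";
            } finally {
                Thread.interrupted(); // clear the interrupt flag
            }
        } finally {
            Files.deleteIfExists(tmp);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

If that write had been part of a multi-page store update, the file would be left half-written, which is consistent with the corruption described above.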
If you are trying to prevent users from exceeding memory resources, your
best bet is just to use a connection pool and limit the max number of
connections.
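Capping the pool might look like this, again assuming HikariCP purely for illustration (a configuration sketch, not runnable without the HikariCP dependency):

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Bound how many queries a user can run in parallel by capping the
// pool, instead of lowering MAX_MEMORY_ROWS.
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:h2:/path/to/file;IGNORECASE=TRUE;MVCC=TRUE;LOCK_TIMEOUT=1");
config.setMaximumPoolSize(4); // hypothetical cap; tune to available memory
HikariDataSource ds = new HikariDataSource(config);
```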
Note that even if a user issues multiple queries in parallel to the same
connection, those queries will execute sequentially server-side.
I