The problem is that the database is growing much more than it should
(as if there were an uncommitted, long-running transaction - but there
isn't any). This is accompanied by a "pageStore: Transaction log could
not be truncated" warning in *.trace.db. The problem disappears if I
manually force CHECKPOINT.
The DB is created. Tables, sequences and indexes are all created with
the 'IF NOT EXISTS' clause.
The code for opening the DB shares some procedures with the code that
creates a DB, especially the part that creates the aforementioned tables,
sequences and indexes. If the DB is opened in read-only mode then:
1)
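The idempotent DDL pattern described above can be sketched as follows; all
object and column names here are hypothetical, not taken from the actual
schema:

```sql
-- Runs safely on both the create path and the reopen path:
-- each statement is a no-op if the object already exists.
CREATE SEQUENCE IF NOT EXISTS GLOBAL_SEQ;
CREATE TABLE IF NOT EXISTS ITEMS (
    ID   BIGINT NOT NULL DEFAULT NEXT VALUE FOR GLOBAL_SEQ PRIMARY KEY,
    NAME VARCHAR(255),
    DATA BLOB
);
CREATE INDEX IF NOT EXISTS ITEMS_NAME_IDX ON ITEMS(NAME);
```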
I did a similar experiment. My table consists of 3 columns (BIGINT NOT
NULL DEFAULT NEXT VALUE FOR GLOBAL_SEQ PRIMARY KEY, BLOB, BLOB) and
currently has over 11*10^6 rows (total DB size is over 17 GB).
Dropping this table was painfully slow - I interrupted it after 15
minutes, cannot afford waiting f
"What you could do is create the large table in a separate database,
and
link it using "create linked table"."
Does two-phase commit work on linked tables? If not, then at least for
me this workaround is a no-go.
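For reference, a sketch of the suggested workaround; the driver class is
H2's real one, but the database path and table name below are made up:

```sql
-- Make BIG_DATA, which physically lives in a separate database file,
-- visible in the current database.
CREATE LINKED TABLE BIG_DATA(
    'org.h2.Driver', 'jdbc:h2:~/big_db', 'sa', '', 'BIG_DATA'
);
```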
--
You received this message because you are subscribed to the Google Groups "H2
Database" group.
I definitely second the idea of having byte[] as keys/values. It's as
universal as it could be.
to solve the problem on my own, but if any of you
can drop me an ad hoc idea of what I should try, I'd be grateful.
Regards,
wburzyns
(total time: 25 seconds)
Regards,
wburzyns
the table/DB grows bigger.
The only issue observed is the closing time - it looks like it
depends linearly on the number of records in the DB (approx. 2.9 s per
million records on my machine).
Greets,
wburzyns
The results are obtained using H2 v. 1.3.153.
For the purpose of this topic by "scalability" I mean the ability of a
single DB instance to "handle growing amounts of work in a graceful
manner".
Regards,
wburzyns
e splitting, IMO. However I'd love to hear how
it behaves with a few hundred million rows :)
Regards,
wburzyns
up/h2-database/browse_thread/thread/d463fc4aed1e56b3/a8eba6d11ebfc931?lnk=gst&q=db+closing+slow#a8eba6d11ebfc931
Regards,
wburzyns
t enough, there should be a possibility to specify their offset as well...
Regards,
wburzyns
ing to re-evaluate my
earlier decision to use BLOB instead of BINARY. I'll let you guys know
my findings.
Bests,
wburzyns
'd like to stay with BLOB.
wburzyns
"If it is possible that
the objects don't fit into memory, then the data type CLOB (for
textual data) or BLOB (for binary data) should be used. For these data
types, the objects are not fully read into memory, by using streams."
wburzyns
Hi Thomas,
With the latest version (1.3.154) the issue doesn't occur anymore - DB
closing is almost instant (less than 1s). There is also no noticeable
performance drop with regard to inserts - I must have screwed up
something with the index I created on my own.
Many thanks,
wburzyns
To everybody interested: as of H2 v. 1.3.154 the long close problem
doesn't occur anymore.
wburzyns
http://www.h2database.com/javadoc/org/h2/jdbcx/JdbcDataSource.html#getURL
will do the job.
wburzyns
Should be fine. See here:
https://groups.google.com/group/h2-database/browse_thread/thread/7ba7a652750e3496
wburzyns
"store about 1Million rows in H2"
No problemo. I'm storing tens of millions of rows and H2 scales very
well (however I do only inserts and retrieval by PK).
On Dec 4, 7:53 pm, Viji wrote:
> Hi,
> We are trying to store about 1Million rows in H2 database tables.
> Can anyone please advise scalability in
As of h2-1.3.162 if I do (substitute a non-negative integer for N,
e.g. 5):
CREATE SEQUENCE IF NOT EXISTS A_SEQ_NAME START WITH N
then the created sequence starts with a value that is not N but N - 1;
see a dump from H2 web console on jdbc:h2:mem:test below:
-
CREATE SEQUENCE IF NOT EXISTS A_
r you'll get
> the next time you ask the sequence for a number. You ask a sequence for
> a number using NEXTVAL(), not CURRVAL() which is why you're getting N-1.
>
> Gili
>
> On 15/02/2012 5:04 PM, wburzyns wrote:
>
> > As of h2-1.3.1
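Per Gili's explanation above, the behaviour can be illustrated as follows
(the sequence name is arbitrary; the commented values follow from his
description of CURRVAL vs. NEXTVAL):

```sql
CREATE SEQUENCE SEQ START WITH 5;
SELECT CURRVAL('SEQ');  -- 4: no value has been handed out yet
SELECT NEXTVAL('SEQ');  -- 5: the first value the sequence actually yields
SELECT CURRVAL('SEQ');  -- 5
```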
This bug is a bit scary as I also have indices on not-so-small varchar
columns.
Thomas, can we know when you plan to release the next H2 version?
Thanks,
wburzyns
On Apr 3, 9:28 pm, Thomas Mueller
wrote:
> Hi,
>
> Thanks a lot for the great test case! I can reproduce the problem,
According to my own measurements, made almost a year ago for the
aforementioned data size, the BLOB type offers slightly better performance
(a few percent) than BINARY. So if you don't need an index on this column,
I'd go with BLOB.
Regards,
wburzyns
On Apr 4, 1:55 pm, Noel Grandin wrote:
could not be truncated" warning in *.trace.db.
As a temporary workaround I explicitly do a CHECKPOINT after
committing/rolling back a prepared transaction. This resolves the
issue, but I believe that checkpointing should be automatic.
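In SQL terms the workaround looks roughly like this, assuming a prepared
transaction named TX1 (the name is made up):

```sql
PREPARE COMMIT TX1;      -- first phase: mark the transaction prepared
COMMIT TRANSACTION TX1;  -- second phase: commit the prepared transaction
CHECKPOINT;              -- force a checkpoint so the log can be truncated
```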
Regards,
wburzyns
Are you using two-phase commit? If yes then have a look at this post:
https://groups.google.com/group/h2-database/browse_thread/thread/2fb61bd17fb8dad9
wburzyns
On Apr 22, 5:52 pm, Noel Grandin wrote:
> You probably have an open transaction that causes the transaction log
> to grow an
Please see below for a standalone testcase. However, after playing with
it I no longer claim that the issue is related to two-phase
commit...
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Savepoint;
import java.sql.Statement;
import java.util.Random;
import
disappear.
I'm attaching a standalone testcase.
Regards,
wburzyns
I didn't know that MAX_LOG_SIZE has no effect with MVStore. The
documentation only states that "This setting has no effect for in-memory
databases." I set MAX_LOG_SIZE to 1 MB in the testcase to ensure the log
gets truncated quickly, just to shorten the time/space requirements for the
testcase.
I really appreciate your quick response!
I re-ran the test against HEAD of master. The fix resolves the issue only
for MVStore. PageStore-backed DBs are still ballooning (just append
";MV_STORE=false" to the URL of the test database).
There is yet another bug related to LOB handling, present in both 1.4.196
and master/HEAD. Namely, with the MVStore, just reading LOBs from the DB
causes it to grow. This behavior virtually rules out any long-term use of
H2/MVStore with LOBs, as the DB grows absurdly.
I'm attaching a standalone testcase.
Here are my conclusions after testing different combinations of backing
engines and datatypes:
1) MVStore in master/HEAD is better than in version 1.4.196.
2) The only viable solution, when it comes to storing binary data in H2, is
MVStore with the BINARY data type - if one can go without streaming. If y
Noel, can you elaborate a bit on why reading LOBs results in writes to the
database? Is it because the LOB content is copied somewhere in the DB so
that the client does not need to keep a transaction open to access the LOB
content? If that's the reason, then I don't see how the ballooning could be
prevented.
> That is exactly it. We delete the LOBs on session close and/or after a
> configurable timeout.

From my testcase's point of view, neither is happening. Regardless of the
LOB_TIMEOUT setting and repeated DB reopens, the DB files grow in size as
LOBs are read.
I understand that technically you might b
d for more details.
Regards,
wburzyns
Every time I post about a bug in H2, somebody is trying to convince me that
there is no bug and that I'm doing something wrong. Nice to see this time
is no different. This backward compatibility is important to me ;)
From the user-code perspective there is no difference between "COMMIT" and
"COMMI
I did not decide to skip anything. Let me number the bullets from
http://www.h2database.com/html/advanced.html#two_phase_commit:
1) Autocommit needs to be switched off
2) A transaction is started, for example by inserting a row
3) The transaction is marked 'prepared' by executing the SQL statement
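The numbered steps map to SQL roughly as follows (the transaction name is
arbitrary; per the H2 docs on two-phase commit, in-doubt transactions can be
listed from INFORMATION_SCHEMA.IN_DOUBT after a crash):

```sql
SET AUTOCOMMIT OFF;                         -- 1) switch autocommit off
INSERT INTO TEST VALUES (1);                -- 2) start a transaction
PREPARE COMMIT TX1;                         -- 3) mark it 'prepared'
-- after a crash and reopen, inspect and resolve in-doubt transactions:
SELECT * FROM INFORMATION_SCHEMA.IN_DOUBT;
COMMIT TRANSACTION TX1;                     -- or ROLLBACK TRANSACTION TX1
```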
Thanks. I tested your fix (both in the testcase and in my prod. env.) and
it seems that the issue is resolved.
+1
It's about time. Quite a few important issues have been fixed since the
last release.
The Javadoc seems to be complete. Since the DB upgrade issue
(https://github.com/h2database/h2database/issues/1592) has been resolved,
what is preventing you guys from making an official release?
Hello,
It looks like there is an intermittent bug lurking in 1.4.200. This occurred
on a freshly created database during a data import performed by a
higher-level routine (i.e. not H2's RUNSCRIPT). The progress of the
import at the time of the crash was around 3.5%:
email.com.gmail.wburzyns
I'll give it a try, but I don't think that the split filesystem is the
culprit here - exactly the same code that crashes with 1.4.200 works
perfectly with 1.4.199.
It looks like you guys are correct. With "split:" removed from the DB URL I
was unable to reproduce the crash.
An interesting observation is that one of the crashes that occurred
previously happened when the total DB size was way below the size limit for
a single file, i.e. there was only one block
t currently the only way is to
change Constants.MAX_STRING_LENGTH in the source code. Would it be possible
for H2 to have a URL option for this?
Regards,
wburzyns
As of current master, both 'PreparedStatement.setBytes(..., ...)' and
'PreparedStatement.setBinaryStream(..., ...)' fail with a 'Value too long
for column "BINARY VARYING"' exception. This happens regardless of whether
the column is declared as VARBINARY or BLOB.
On Monday, August 31, 2020 at 6:
You're right, I didn't notice that the exception was coming from
PreparedStatement.getBytes() later in my code, which I had overlooked when
adapting to recent changes in H2. Sorry for bothering you.
BTW Are BLOBs stored differently in H2 (as compared to VARBINARY)? Is there
a performance difference bet
Do all transactional protections (including those provided by two-phase
commit protocol) apply to BLOBs or are they second-class citizens?
On Wednesday, September 2, 2020 at 4:34:41 AM UTC+2 Evgenij Ryazanov wrote:
> Yes, LOB values are stored separately in H2 and in many other DBMS, unlike
> o
I'm on 1.4.199 and I have tens of millions of opaque data pieces in a DB
instance. Currently they're stored as BINARY and I'm satisfied with both
performance and stability (I'm using H2 in embedded mode). Most of these
opaque data pieces are below the new limit of 1 MB, but a few of them go
well
Thank you for all the clarifications.
I read the discussion in
https://github.com/h2database/h2database/issues/1808. The minimal testcase
for 1808 reproduction does parallel updates and reads on a LOB-containing
row. Am I safe from issue 1808 if I don't do anything like that, i.e. if I
have a
Hi,
What's the relationship between MAX_LENGTH_INPLACE_LOB and
StreamStore.minBlockSize (currently set to 256)?
I'm asking because I'm trying to tune the way I store my data. I noticed
that if I pad every LOB I put inside H2 to at least 256 bytes (many of my
LOBs are smaller than that) I obtai
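For context, the in-place threshold itself is a per-database setting that
can be changed from SQL (the value below is arbitrary, chosen only for
illustration):

```sql
-- LOBs up to this many bytes are stored in-place in the row;
-- larger ones go to the separate LOB storage (StreamStore under MVStore).
SET MAX_LENGTH_INPLACE_LOB 512;
```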
possible that H2, which uses MVStore as its
storage engine and adds a lot of other stuff (SQL) on top of it, can be
faster than raw MVStore. Am I doing something wrong? If I'm not, is there a
way I can mimic H2's way of using MVStore to improve commit performance?
Thanks,
wburzyns