Hi,

As for PreparedStatement, see the "Use Prepared Statements" section at
http://h2database.com/html/performance.html#database_performance_tuning

Also, I wonder if it's a good idea to insert so many rows with autocommit
enabled. I would disable autocommit, and then commit every 1000 rows or so.
This should improve performance for all databases (not just H2).
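
To make this concrete, here is a minimal sketch (the table and column names are made up for illustration) that combines a PreparedStatement with autocommit disabled, committing every 1000 rows:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertSketch {

    static final int BATCH_SIZE = 1000;

    // Pure helper: commit after this (1-based) row?
    static boolean shouldCommit(int rowsInserted, int batchSize) {
        return rowsInserted % batchSize == 0;
    }

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:~/test")) {
            conn.setAutoCommit(false); // one transaction per batch, not per row
            try (PreparedStatement ps =
                    conn.prepareStatement("INSERT INTO t1(id, name) VALUES(?, ?)")) {
                for (int i = 1; i <= 100_000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row-" + i);
                    ps.addBatch();
                    if (shouldCommit(i, BATCH_SIZE)) {
                        ps.executeBatch();
                        conn.commit();
                    }
                }
                ps.executeBatch(); // flush any trailing partial batch
                conn.commit();
            }
        }
    }
}
```

The same pattern applies to MERGE statements, and to the other databases in your benchmark.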

Regards,
Thomas



On Mon, Aug 12, 2013 at 6:12 AM, Pierre <[email protected]> wrote:

> Thanks Thomas, but the profiler didn't show anything interesting.
> Anyway, the issue is specific to H2: it doesn't handle massive inserts
> well. Once a table exceeds 3 million rows, inserts become very slow. Adding
> LOG=0;UNDO_LOG=0;NOLOCK or similar optimisations only delays the inevitable
> slowdown; they help a lot, but only below 3 million rows.
>
> I know this because I wrote a benchmark and looked at the numbers:
>
> Inserting 1000 t1 records and 6000 t2 records (that's OK)
> ---
> MySQLDB... 0:00:02.646 (2646 ms)
> HSQLDB... 0:00:00.732 (732 ms)
> H2DB... 0:00:01.671 (1671 ms)
> ---
> MySQLDB... 0:00:01.842 (1842 ms)
> HSQLDB... 0:00:00.289 (289 ms)
> H2DB... 0:00:00.905 (905 ms)
> ---
> MySQLDB... 0:00:01.478 (1478 ms)
> HSQLDB... 0:00:00.231 (231 ms)
> H2DB... 0:00:00.662 (662 ms)
>
> Inserting 100000 t1 records and 600000 t2 records (H2 starts to slow down)
> ---
> MySQLDB... 0:00:53.679 (53679 ms)
> HSQLDB... 0:00:18.128 (18128 ms)
> H2DB... 0:02:16.546 (136546 ms)
> ---
> MySQLDB... 0:00:47.110 (47110 ms)
> HSQLDB... 0:00:18.099 (18099 ms)
> H2DB... 0:00:58.712 (58712 ms)
> ---
> MySQLDB... 0:00:54.933 (54933 ms)
> HSQLDB... 0:00:17.900 (17900 ms)
> H2DB... 0:01:01.120 (61120 ms)
> ---
>
>
> Inserting 300000 t1 records and 1800000 t2 records (H2 becomes very slow)
> ---
> MySQLDB... 0:02:55.809 (175809 ms)
> HSQLDB... 0:01:15.427 (75427 ms)
> H2DB... 0:15:06.436 (906436 ms)
> ---
> MySQLDB... 0:02:58.246 (178246 ms)
> HSQLDB... 0:01:11.089 (71089 ms)
> H2DB... 0:02:35.589 (155589 ms)
> ---
> MySQLDB... 0:02:28.367 (148367 ms)
> HSQLDB... 0:01:11.286 (71286 ms)
> H2DB... 0:02:29.787 (149787 ms)
> ---
>
> Inserting 1000000 t1 records and 6000000 t2 records (unusable at this row
> count)
> ---
> MySQLDB... 0:14:36.311 (876311 ms)
> HSQLDB... 0:07:09.021 (429021 ms)
> H2DB... 0:53:57.432 (3237432 ms)
> ---
> MySQLDB... 0:11:34.360 (694360 ms)
> HSQLDB... 0:07:16.572 (436572 ms)
> H2DB... 0:25:06.446 (1506446 ms)
> ---
> MySQLDB... 0:11:10.191 (670191 ms)
> HSQLDB... 0:07:28.551 (448551 ms)
> H2DB... too long, aborted.
>
> I uploaded the bench on github https://github.com/serphacker/dbbench
>
> On Sunday, August 11, 2013 at 07:49:19 UTC+2, Thomas Mueller wrote:
>>
>> Hi,
>>
>> I'm not sure if you received / read my previous mail... here again:
>>
>> It's hard to say what the problem might be. I suggest finding out what's
>> going on using the built-in profiler:
>> http://h2database.com/html/performance.html#built_in_profiler
>>
>> See also the related documentation on that page.
>>
>> Regards,
>> Thomas
>> On 11.08.2013 at 04:42, "Pierre" <[email protected]> wrote:
>>
>>>
>>> Could this issue be due to the fact that I call createStatement once and
>>> reuse that Statement for all the queries in my application? I ran my
>>> application without H2 and dumped all SQL requests to a file (there is no
>>> speed problem in this mode, so the DB really is the bottleneck), so I can
>>> reproduce the issue in a basic example that is easier to debug (while read
>>> line; updateQuery()). Maybe this is related to some option like LOG,
>>> UNDO_LOG or LOCK? I disabled all of them for the SQL dump import and it
>>> was much faster.
>>>
>>>
>>>
>>> On Saturday, August 10, 2013 at 17:37:03 UTC+2, Pierre wrote:
>>>>
>>>> Hello,
>>>>
>>>> I'm running H2 in embedded mode. I have 3 tables, each with several
>>>> million rows. The db file size is ~2 GB. I'm using only one concurrent
>>>> connection to the database. Most of the queries are INSERT and MERGE and
>>>> are very simple (no joins). I have a big heap size (-Xmx10g).
>>>>
>>>> Sometimes a query takes a very long time to execute, more than 15
>>>> seconds. The problem is that it doesn't affect a specific query; it
>>>> affects all the queries.
>>>>
>>>> So, approximately every 30 seconds, a random query will take more than
>>>> 15 seconds to execute instead of a few milliseconds. And it will lock my
>>>> whole application, because I synchronize at the application level (only
>>>> one connection to the DB). Besides, the more the DB grows, the longer
>>>> this random slow query takes (it was 13 seconds at the beginning, now 18
>>>> seconds).
>>>>
>>>> Do you have any idea what could be the reason for this? Can H2 in
>>>> embedded mode handle a DB with several million rows? Any tweaks?
>>>>
>>>>  --
>>> You received this message because you are subscribed to the Google
>>> Groups "H2 Database" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to [email protected].
>>> To post to this group, send email to [email protected].
>>> Visit this group at http://groups.google.com/group/h2-database.
>>> For more options, visit https://groups.google.com/groups/opt_out.
