2009/3/10 Mikkel Kamstrup Erlandsen <[email protected]>:
> 2009/3/10 Mikkel Kamstrup Erlandsen <[email protected]>:
>> 2009/3/6 Thomas Mueller <[email protected]>
>>>
>>> Hi,
>>>
>>> I'm sorry, I don't understand why getLastSetBit can take so much time...
>>> Could you describe your use case, please? What database version do you
>>> use? If you use version 1.0.x, could you check whether upgrading to
>>> 1.1.x solves the problem?
>>>
>>
>> Ok, now I've tested it with 1.1.108. Off the bat it appears to be
>> a lot faster than 1.0 at inserting records (almost a factor of 2, and a
>> factor of 3 faster than PostgreSQL), but over time I see severe
>> performance degradation in the inserts (as I also do with 1.0).
>>
>> Unfortunately I am having a hard time setting up the profiler today as
>> the network seems to be mad at me or something...
>>
>> Also I have yet to test this with my patch enabled (I will only do
>> this if profiling the app turns up BitField.getLastSetBit() again).
>>
>> Here's a bit of metadata on my setup: my record corpus right now is
>> about 3M records (but when I hit production it'll be more like 10M).
>> Inserts are flying until I reach about 1M, at which point the rate slowly
>> decreases to about half speed (which is still faster than e.g. PostgreSQL
>> :-) on the same data). Then somewhere around 3M the INSERT rate slows to
>> a halt at about ½ record/sec.
>>
>> My DB looks like this:
>> /* RECORDS TABLE (this is where everything happens) */
>> CREATE TABLE summa_records (id VARCHAR(255) PRIMARY KEY,
>>                             base VARCHAR(31),
>>                             deleted INTEGER,
>>                             indexable INTEGER,
>>                             hasRelations INTEGER,
>>                             data BYTEA,
>>                             ctime BIGINT,
>>                             mtime BIGINT,
>>                             meta BYTEA);
>>
>> CREATE UNIQUE INDEX i ON summa_records(id);
>> CREATE UNIQUE INDEX mb ON summa_records(mtime, base);
>
> I've been banging on this a bit more now. It seems that I have a
> memory problem, although I am not sure whether it explains
> everything.
>
> jstat reports that I had a full GC every second or so, but I am a bit
> puzzled as to why I am using so much memory - this was running with
> -Xmx512m. Then I tried bumping -Xmx to 1024m, which brought the full
> GC rate down to about 1-2 per minute, but the INSERT rate stays down
> at about ½ record/sec.
>
> I'll try and bump the mem to 4G and see what happens (although I
> wouldn't want that on my production system).

Ok, I think I can free H2 of all charges :-) At least version 1.1.108.

I have it at 5M records now and the INSERT performance has only dropped
10-20% from the initial performance.

The conclusion is a bit hazy since I did not have time to investigate
this properly. In the meantime I have migrated all of the code to group
100 INSERTs per transaction and used
Connection.TRANSACTION_READ_UNCOMMITTED where appropriate. On top of
that I had a contention problem in my ingest workflow, so many factors
could have caused the apparent slowdown.
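For anyone curious, the batching change boils down to committing once per
group of INSERTs instead of once per row. My actual code is Java/JDBC
against H2; the sketch below uses Python's built-in sqlite3 module purely
so it is self-contained, and the BATCH_SIZE constant and toy record values
are hypothetical - only the summa_records table layout mirrors the schema
quoted above.

```python
import sqlite3

BATCH_SIZE = 100  # commit every 100 INSERTs, as described above


def ingest(records):
    """Insert records, committing one transaction per BATCH_SIZE rows."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE summa_records ("
        "  id TEXT PRIMARY KEY, base TEXT, deleted INTEGER,"
        "  indexable INTEGER, hasRelations INTEGER, data BLOB,"
        "  ctime INTEGER, mtime INTEGER, meta BLOB)"
    )
    pending = 0
    for rec in records:
        conn.execute(
            "INSERT INTO summa_records VALUES (?,?,?,?,?,?,?,?,?)", rec
        )
        pending += 1
        if pending >= BATCH_SIZE:
            conn.commit()  # end the transaction after BATCH_SIZE rows
            pending = 0
    conn.commit()  # flush the final partial batch
    return conn


# Toy records: (id, base, deleted, indexable, hasRelations,
#               data, ctime, mtime, meta)
rows = [(f"rec{i}", "base1", 0, 1, 0, b"", i, i, b"") for i in range(250)]
conn = ingest(rows)
count = conn.execute("SELECT COUNT(*) FROM summa_records").fetchone()[0]
print(count)  # 250
```

In JDBC the same idea is usually expressed with auto-commit off plus
PreparedStatement.addBatch()/executeBatch(), committing after each batch.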

As a side note, I should add that I have H2 blazing now. I can
saturate the ingest workflow, which is based on a Java XMLStreamReader
:-) Kudos to you, Thomas!

-- 
Cheers,
Mikkel
