2009/3/10 Mikkel Kamstrup Erlandsen <[email protected]>:
> 2009/3/6 Thomas Mueller <[email protected]>
>>
>> Hi,
>>
>> I'm sorry, I don't understand why getLastSetBit can take so much
>> time...
>> Could you describe your use case please? What database version do you
>> use? If you use version 1.0.x, could you try if upgrading to 1.1.x
>> solves the problem?
>>
>
> Ok, now I've tested it with 1.1.108. Off the bat it appears to be
> a lot faster than 1.0 at inserting records (almost a factor of 2, and a
> factor of 3 faster than PostgreSQL), but over time I see severe
> performance degradation in the inserts (as I also do with 1.0).
>
> Unfortunately I am having a hard time setting up the profiler today as
> the network seems to be mad at me or something...
>
> Also I have yet to test this with my patch enabled (I will only do
> this if profiling the app turns up BitField.getLastSetBit() again).
>
> Here's a bit of metadata on my setup: My record corpus right now is
> about 3M records (but when I hit production it'll be more like 10M).
> Inserts are flying until I reach about 1M, at which point the rate slowly
> drops to about half speed (which is still faster than e.g. PostgreSQL :-)
> on the same data). Then somewhere around 3M the INSERT rate grinds
> almost to a halt, at about ½ record/sec.
>
> My DB looks like this:
> /* RECORDS TABLE (this is where everything happens) */
> CREATE TABLE summa_records (id           VARCHAR(255) PRIMARY KEY,
>                             base         VARCHAR(31),
>                             deleted      INTEGER,
>                             indexable    INTEGER,
>                             hasRelations INTEGER,
>                             data         BYTEA,
>                             ctime        BIGINT,
>                             mtime        BIGINT,
>                             meta         BYTEA);
>
> CREATE UNIQUE INDEX i ON summa_records(id);
> CREATE UNIQUE INDEX mb ON summa_records(mtime,base);
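One thing I noticed in the schema above (not sure it explains the slowdown): the PRIMARY KEY on id should already be backed by a unique index, so the separate index i on the same column looks redundant and adds maintenance cost to every insert. Assuming nothing refers to the index i by name, it could simply be dropped:

```sql
-- The PRIMARY KEY on id is already enforced by its own unique index,
-- so the extra unique index on id is redundant and only slows inserts:
DROP INDEX i;
```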

I've been banging on this a bit more now. It seems that I have a
memory problem, although I am not sure if this explains everything.

jstat reports a full GC every second or so, but I am a bit puzzled
as to why I am using so much memory - this was running with
-Xmx512m. Bumping -Xmx to 1024m brought the full GC rate down to
about 1-2 per minute, but the INSERT rate stays at about ½ record/sec.

I'll try bumping the heap to 4G and see what happens (although I
wouldn't want that on my production system).
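In case it helps anyone reproducing this: besides jstat, GC activity can also be sampled from inside the JVM with the plain JDK management beans. A rough sketch (GcProbe is just a hypothetical name; counts are cumulative per collector, and -1 means the collector doesn't report them):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcProbe {

    // Sum the cumulative collection counts over all registered
    // collectors, skipping collectors that report -1 (unavailable).
    static long totalGcCount() {
        long count = 0;
        for (GarbageCollectorMXBean gc
                : ManagementFactory.getGarbageCollectorMXBeans()) {
            long c = gc.getCollectionCount();
            if (c > 0) {
                count += c;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        long before = totalGcCount();
        System.gc(); // only a hint to the JVM; may or may not collect
        long after = totalGcCount();
        System.out.println("GC count delta: " + (after - before));
    }
}
```

Logging that delta next to the INSERT rate every few seconds would show directly whether the slowdown tracks the full-GC frequency.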

Cheers,
Mikkel

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "H2 
Database" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/h2-database?hl=en
-~----------~----~----~----~------~----~------~--~---