Hi,
By the way, those bugs have been fixed now. There is now a WriteBuffer that
automatically increases its capacity (similar to a ByteArrayOutputStream),
and the bug in the LIRS cache has been fixed as well.
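
For context, here is a minimal sketch of the idea: a write buffer that wraps
a ByteBuffer and internally grows it when needed (just an illustration with
made-up names, not the actual org.h2.mvstore.WriteBuffer implementation):

import java.nio.ByteBuffer;

// Simplified auto-growing write buffer, similar in spirit to a
// ByteArrayOutputStream: it reallocates a larger ByteBuffer as needed.
public class GrowingWriteBuffer {

    private ByteBuffer buff = ByteBuffer.allocate(1024);

    // Make sure at least 'len' more bytes fit, at least doubling the
    // capacity so that growth stays amortized.
    private void ensureCapacity(int len) {
        if (buff.remaining() < len) {
            int needed = buff.position() + len;
            int newCapacity = Math.max(buff.capacity() * 2, needed);
            ByteBuffer larger = ByteBuffer.allocate(newCapacity);
            buff.flip();
            larger.put(buff);
            buff = larger;
        }
    }

    public GrowingWriteBuffer putInt(int x) {
        ensureCapacity(4);
        buff.putInt(x);
        return this;
    }

    public GrowingWriteBuffer put(byte[] data) {
        ensureCapacity(data.length);
        buff.put(data);
        return this;
    }

    // Expose the underlying buffer, for example to flip and write it out.
    public ByteBuffer getBuffer() {
        return buff;
    }
}
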
Regards,
Thomas
On Friday, October 11, 2013, Thomas Mueller wrote:
> Hi,
>
> > DataUtils.ensureCapacity() isn't making a large enough ByteBuffer?
>
> Yes, or maybe the method is not called in some cases where it should be.
> Maybe it's better to wrap the ByteBuffer in a WriteBuffer that internally
> calls ensureCapacity when required. The current mechanism might be a bit
> more efficient, but it is also much more error-prone.
>
> > CacheLongKeyLIRS
>
> Yes, this looks like a bug. While there are many test cases for this
> class, what is not tested is overflow (when using a large cache size). I
> just found that the following code runs into an endless loop after about
> 54'458'000 entries. This is a really large cache, given that by default
> there are 16 segments (so it would fail at 16 times that number of
> entries), but it's still a bug. What is strange is that it doesn't always
> fail at the same entry, so it might be a combination of a JVM bug and a
> bug in the cache. There might be other overflow problems.
>
> int size = 100 * 1024 * 1024;
> CacheLongKeyLIRS<Integer> test =
>         new CacheLongKeyLIRS<Integer>(size, 1, 1, 0);
> Integer value = 1;
> for (int i = 0; i < size; i++) {
>     test.put(i, value);
> }
>
> Regards,
> Thomas
>
> On Fri, Oct 11, 2013 at 8:33 AM, Noel Grandin <[email protected]> wrote:
>
>> Hi
>>
>> I had a quick look at this exception that Brian reported:
>>
>>
>>
>> Caused by: java.lang.NullPointerException
>>     at org.h2.mvstore.cache.CacheLongKeyLIRS$Segment.pruneStack(CacheLongKeyLIRS.java:824)
>>     at org.h2.mvstore.cache.CacheLongKeyLIRS$Segment.convertOldestHotToCold(CacheLongKeyLIRS.java:815)
>>     at org.h2.mvstore.cache.CacheLongKeyLIRS$Segment.evict(CacheLongKeyLIRS.java:783)
>>     at org.h2.mvstore.cache.CacheLongKeyLIRS$Segment.put(CacheLongKeyLIRS.java:711)
>>     at org.h2.mvstore.cache.CacheLongKeyLIRS.put(CacheLongKeyLIRS.java:162)
>>     at org.h2.mvstore.MVStore.readPage(MVStore.java:1443)
>>     at org.h2.mvstore.MVMap.readPage(MVMap.java:759)
>>     at org.h2.mvstore.Page.getChildPage(Page.java:207)
>>     at org.h2.mvstore.MVMap.binarySearch(MVMap.java:449)
>>     at org.h2.mvstore.MVMap.binarySearch(MVMap.java:450)
>>     at org.h2.mvstore.MVMap.binarySearch(MVMap.java:450)
>>     at org.h2.mvstore.MVMap.get(MVMap.java:431)
>>
>> I suspect that this code in CacheLongKeyLIRS$Segment is the problem:
>>
>>     private void evict(Entry<V> newCold) {
>>         // ensure there are not too many hot entries:
>>         // left shift of 5 is multiplication by 32, that means if there
>>         // are less than 1/32 (3.125%) cold entries, a new hot entry
>>         // needs to become cold
>>         while ((queueSize << 5) < mapSize) {
>>             convertOldestHotToCold();
>>         }
>>
>> "queueSize" is an int , and if the queue gets big enough, the "<< 5" will
>> operation will generate a zero because it will run move all of the bits
>> outside the available 32 bits.
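>>
>> To see the wrap-around in isolation, here is a tiny standalone demo (not
>> H2 code; the value 70000000 is just a hypothetical large queue size):
>>
>>     public class ShiftOverflowDemo {
>>         public static void main(String[] args) {
>>             int queueSize = 70000000;
>>             // the int shift overflows and prints -2054967296
>>             System.out.println(queueSize << 5);
>>             // widening to long first prints the correct 2240000000
>>             System.out.println(((long) queueSize) << 5);
>>         }
>>     }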
>>
>> I think the code should look like:
>>
>>     while ((((long) queueSize) << 5) < mapSize) {
>>         convertOldestHotToCold();
>>     }
>>
>> or maybe:
>>
>>     while (queueSize < (mapSize >> 5)) {
>>         convertOldestHotToCold();
>>     }
>