[ https://issues.apache.org/jira/browse/JENA-91?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13094537#comment-13094537 ]

Andy Seaborne commented on JENA-91:
-----------------------------------

The Report_JENA91 test case shows that the NodeTableTrans was not properly 
resetting after all the data was written to the base table.  This is now fixed 
in SVN -- the requirement is that NodeTableTrans.allocOffset pass through to 
the underlying base node table, not attempt to calculate the allocation offset 
itself.  The underlying base node table may be further updated by later 
transactions committing a NodeTableTrans, so any calculated/cached information 
used for offset calculation then becomes wrong.
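
To illustrate the difference (a minimal sketch with made-up names, not the 
actual TDB classes): a snapshot of the base table's allocation point cached at 
transaction start goes stale as soon as another transaction commits, whereas a 
pass-through call always reflects the current state of the base table.

    // Illustrative sketch only -- simplified, hypothetical names, not the real TDB code.
    interface BaseNodeTable {
        long allocOffset() ;   // current allocation point of the underlying node storage
    }

    class NodeTableTransSketch {
        private final BaseNodeTable base ;      // shared base table, grown by committing writers
        private final long offsetAtTxnStart ;   // snapshot taken when this transaction began

        NodeTableTransSketch(BaseNodeTable base) {
            this.base = base ;
            this.offsetAtTxnStart = base.allocOffset() ;
        }

        // Stale: another transaction may have committed and grown the base table since
        // the snapshot was taken, so this no longer matches the real end of the storage.
        long allocOffsetCached() {
            return offsetAtTxnStart ;
        }

        // Pass-through: always ask the base table, so the offset reflects commits made
        // by other transactions after this one started.
        long allocOffsetPassThrough() {
            return base.allocOffset() ;
        }
    }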

This explains seeing "Different ids allocated" and "Impossibly large object", 
with the exact symptom depending on how the node data is accessed or corrupted 
later.

TxTDB build 20110831.124751-10 fixes this.

This might also explain JENA-96 and JENA-97 (not definitely, but without a test 
case for those it's hard to be sure one way or the other).  Closing all 
possibly related JIRAs; we can reopen them if and when evidence based on the 
31/Aug or later build shows there to be a problem.

> extremely large buffer is being created in ObjectFileStorage
> ------------------------------------------------------------
>
>                 Key: JENA-91
>                 URL: https://issues.apache.org/jira/browse/JENA-91
>             Project: Jena
>          Issue Type: Bug
>          Components: TDB
>            Reporter: Simon Helsen
>            Assignee: Andy Seaborne
>            Priority: Critical
>         Attachments: JENA-91_NodeTableTrans_r1159121.patch, 
> TestTransSystem.patch, TestTransSystem2.patch, TestTransSystem3.patch
>
>
> I tried to debug the OME and check why a ByteBuffer is causing my native 
> memory to explode in almost no time. It all seems to happen in this bit of 
> code in com.hp.hpl.jena.tdb.base.objectfile.ObjectFileStorage (lines 243 
> onwards):
>         // No - it's in the underlying file storage.
>         lengthBuffer.clear() ;
>         int x = file.read(lengthBuffer, loc) ;
>         if ( x != 4 )
>             throw new FileException("ObjectFile.read("+loc+")["+filesize+"]["+file.size()+"]: Failed to read the length : got "+x+" bytes") ;
>         int len = lengthBuffer.getInt(0) ;
>         ByteBuffer bb = ByteBuffer.allocate(len) ;
> My debugger shows that x==4. It also shows the lengthBuffer has the following 
> content: [111, 110, 61, 95]. This amounts to the value of len=1869495647, 
> which is rather a lot :-) Obviously, the next statement (ByteBuffer.allocate) 
> causes the OME.
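
For reference, decoding those four bytes as a big-endian int (ByteBuffer's 
default byte order) does give that value; the bytes are the ASCII characters 
"on=_", i.e. node data being read where a length was expected. A small 
standalone check of the arithmetic (not Jena code):

    import java.nio.ByteBuffer ;

    public class LengthDecode {
        public static void main(String[] args) {
            // The four bytes reported in lengthBuffer.
            byte[] observed = { 111, 110, 61, 95 } ;          // ASCII "on=_" -- node data, not a length
            int len = ByteBuffer.wrap(observed).getInt(0) ;   // big-endian by default
            System.out.println(len) ;                         // 1869495647, the huge allocation size
        }
    }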
