[ 
https://issues.apache.org/jira/browse/QPID-7791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16051775#comment-16051775
 ] 

Keith Wall commented on QPID-7791:
----------------------------------

{{MessageMetaDataBinding#entryToObject}} (and its Derby counterpart) needs to 
cater for the possibility that the metadata is larger than the default direct 
buffer size.  It needs to allocate a list of QBBs and pass that list to 
MessageMetaDataType#createMetaData, which currently accepts a single QBB.  
If it were to naively call {{QpidByteBuffer.allocateDirect(entry.getSize())}} 
when the metadata is larger than the default direct buffer size, we would be 
relying on the JVM to deallocate chunks too large to be cached.
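A minimal sketch of the chunked allocation described above, using plain java.nio.ByteBuffer in place of QpidByteBuffer (the class name ChunkedDirectAllocation, the constant CHUNK_SIZE, and the method allocateChunked are hypothetical illustrations, not Qpid API; the real limit would be the broker's configured pooled direct buffer size):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class ChunkedDirectAllocation {
    // Hypothetical stand-in for the broker's pooled direct buffer size.
    static final int CHUNK_SIZE = 256 * 1024;

    // Allocate the payload as a list of direct buffers, each no larger than
    // CHUNK_SIZE, so every allocation stays within the cacheable pool size
    // instead of producing one oversized, uncacheable direct buffer.
    static List<ByteBuffer> allocateChunked(byte[] data) {
        List<ByteBuffer> buffers = new ArrayList<>();
        int offset = 0;
        while (offset < data.length) {
            int len = Math.min(CHUNK_SIZE, data.length - offset);
            ByteBuffer buf = ByteBuffer.allocateDirect(len);
            buf.put(data, offset, len);
            buf.flip();
            buffers.add(buf);
            offset += len;
        }
        return buffers;
    }

    public static void main(String[] args) {
        // Metadata larger than one chunk splits into multiple buffers.
        byte[] large = new byte[CHUNK_SIZE * 2 + 100];
        List<ByteBuffer> buffers = allocateChunked(large);
        System.out.println(buffers.size());
    }
}
```

The receiving side (createMetaData) would then have to accept the whole list rather than a single buffer, as the comment notes.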


> Recover metadata into direct memory
> -----------------------------------
>
>                 Key: QPID-7791
>                 URL: https://issues.apache.org/jira/browse/QPID-7791
>             Project: Qpid
>          Issue Type: Improvement
>          Components: Java Broker
>            Reporter: Keith Wall
>             Fix For: qpid-java-broker-7.0.0
>
>
> Currently, when reading metadata, the message store creates heap buffers 
> rather than direct ones.  This code path is used by both recovery and the 
> re-reading of metadata following a flow to disk.
> This approach means that the Broker footprint differs: if messages come in 
> on the wire, content and metadata are (at least initially) in direct memory; 
> if messages are recovered, metadata is in heap.  This makes giving advice 
> around the sizing of Qpid's memory more difficult.  If the user makes a poor 
> choice, the Broker may not be restartable because there is too little heap 
> to recover all the metadata.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
