[ https://issues.apache.org/jira/browse/AVRO-1045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jeremy Lewi updated AVRO-1045:
------------------------------
Attachment: AVRO-1045.patch
The patch includes a new test case in the unit tests.
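As a sketch of the shape such a test can take (class and variable names here
are illustrative, not the patch's actual test), the failing input is a buffer
whose limit is below its capacity, as described in the issue below:

  import java.nio.ByteBuffer;
  import org.apache.avro.Schema;
  import org.apache.avro.generic.GenericData;

  public class DeepCopyBytesRepro {
    public static void main(String[] args) {
      ByteBuffer buf = ByteBuffer.allocate(1024); // capacity = 1024
      buf.put(new byte[] {1, 2, 3});
      buf.flip();                                 // limit = 3 < capacity
      Schema bytes = Schema.create(Schema.Type.BYTES);
      // Before the fix this throws BufferUnderflowException; with the change
      // suggested below it completes, copying the 3 readable bytes into a
      // capacity-sized array.
      ByteBuffer copy = (ByteBuffer) GenericData.get().deepCopy(bytes, buf);
      System.out.println("copied, capacity = " + copy.capacity());
    }
  }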
> deepCopy of BYTES underflow exception
> -------------------------------------
>
> Key: AVRO-1045
> URL: https://issues.apache.org/jira/browse/AVRO-1045
> Project: Avro
> Issue Type: Bug
> Components: java
> Affects Versions: 1.6.2
> Reporter: Jeremy Lewi
> Assignee: Jeremy Lewi
> Priority: Minor
> Fix For: 1.6.3
>
> Attachments: AVRO-1045.patch
>
>
> In org.apache.avro.generic.GenericData.deepCopy, the code for copying a
> ByteBuffer is:
>   ByteBuffer byteBufferValue = (ByteBuffer) value;
>   byte[] bytesCopy = new byte[byteBufferValue.capacity()];
>   byteBufferValue.rewind();
>   byteBufferValue.get(bytesCopy); // tries to read capacity bytes
>   byteBufferValue.rewind();
>   return ByteBuffer.wrap(bytesCopy);
> I think this is problematic because it will cause a BufferUnderflowException
> to be thrown if the ByteBuffer's limit is less than its capacity:
> get(bytesCopy) tries to read bytesCopy.length == capacity bytes, but after
> rewind() only limit bytes remain.
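> A standalone illustration of that failure mode, using only java.nio
> (illustrative, not part of the patch):
>   import java.nio.ByteBuffer;
>
>   public class UnderflowDemo {
>     public static void main(String[] args) {
>       ByteBuffer buf = ByteBuffer.allocate(16);  // capacity = 16
>       buf.put(new byte[] {1, 2, 3});
>       buf.flip();                                // limit = 3 < capacity
>       byte[] dst = new byte[buf.capacity()];     // capacity-sized destination
>       buf.rewind();
>       buf.get(dst); // BufferUnderflowException: wants 16 bytes, only 3 remain
>     }
>   }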
> My use case is as follows: I have ByteBuffers backed by large arrays so that
> I can avoid resizing the array every time I write data, so limit < capacity.
> I think Avro should respect this when the data is written or copied. When
> data is serialized, Avro should automatically use the minimum number of
> bytes.
> When an object is copied, I think it makes sense to preserve the capacity of
> the underlying buffer as opposed to compacting it.
> So I think the code could be fixed by replacing the get call with
> byteBufferValue.get(bytesCopy, 0, byteBufferValue.limit());
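> In context, the copy block above would then become (a sketch of the
> suggested change, not the committed patch):
>   ByteBuffer byteBufferValue = (ByteBuffer) value;
>   byte[] bytesCopy = new byte[byteBufferValue.capacity()];
>   byteBufferValue.rewind();
>   // Read only the limit bytes that are actually readable; the tail of
>   // bytesCopy stays zeroed, so the copy keeps the original capacity.
>   byteBufferValue.get(bytesCopy, 0, byteBufferValue.limit());
>   byteBufferValue.rewind();
>   return ByteBuffer.wrap(bytesCopy);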