Using the following I get a java.nio.BufferUnderflowException:

// where pdf is a ByteBuffer from my Avro stream
int size = pdf.remaining();
byte[] buf = new byte[size];
pdf.get(buf, 0, size);

The pathology I am currently seeing is that when I write a file out (from data 
contained in the Avro) it has trailing data from the previous, larger file. 
In other words, the ByteBuffer contains:

data = [ [smaller file] [extra data from previous file] ]
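
To avoid that, my plan is to write out only a copy of the bytes between the 
buffer's position and limit, never pdf.array() (the backing array can be 
larger than the element and apparently still holds bytes from the previous, 
bigger record). Roughly like this -- just a sketch, outputFile is a 
placeholder name and exception handling is omitted:

// copy exactly the element's bytes without consuming the original buffer
byte[] exact = new byte[pdf.remaining()];
pdf.duplicate().get(exact);          // duplicate() leaves pdf's position untouched

// write only the copied slice, so no trailing bytes from the reused array
java.io.FileOutputStream out = new java.io.FileOutputStream(outputFile);
try {
    out.write(exact);
} finally {
    out.close();
}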


On Mar 18, 2011, at 7:25 PM, David Rosenstrauch wrote:

> I think - and someone please correct me if I'm wrong - the offset is always 
> zero, and the length is byteBuffer.remaining().
> 
> So you would make a call something like:
> 
> byteBuffer.get(byteArray, 0, byteBuffer.remaining())
> 
> Then byteArray would contain the buffer's contents.
> 
> HTH,
> 
> DR
> 
> On 03/18/2011 08:22 PM, sean jensen-grey wrote:
>> I have a large sequence of PDFs stored in an Avro file as part of a larger 
>> structure.
>> 
>> I have found a bug in my code where I was calling
>> 
>>     byteBuffer.array() to get back the byte[]. This is incorrect, as array() returns the 
>> entire backing store and NOT just the contents of the element stored in Avro.
>> 
>> How/where do I get the offset and the length of the ByteBuffer returned from 
>> Avro?
>> 
>> The convenience classes were generated via the Maven plugin, so my Record 
>> signature is
>> 
>>      MyRecord extends org.apache.avro.specific.SpecificRecordBase implements 
>> org.apache.avro.specific.SpecificRecord
>> 
>> The avro schema entry is
>> 
>> {
>>      "name" : "pdfs",
>>      "type" :  {  "type" : "array", "items": "bytes" }
>> }
> 
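
For the archives: with an array-backed (heap) buffer, the offset and length 
can also be read directly off the ByteBuffer itself. A sketch -- it assumes 
hasArray() is true, i.e. a heap buffer:

// valid region of an array-backed ByteBuffer
if (pdf.hasArray()) {
    byte[] backing = pdf.array();                      // whole (possibly reused) backing store
    int offset = pdf.arrayOffset() + pdf.position();   // start of this element's bytes
    int length = pdf.remaining();                      // number of valid bytes
    // only backing[offset .. offset + length - 1] belongs to this element
}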
