[ https://issues.apache.org/jira/browse/HADOOP-735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12465607 ]

Milind Bhandarkar commented on HADOOP-735:
------------------------------------------

Indeed. This was necessary because the variable-sized integers and longs were 
throwing IOException in their read methods, whereas the serialization of 
BytesWritable uses a plain int. For a record that contains only a buffer, the 
generated code was still catching IOException, which resulted in a compile 
error for catching an exception that is never thrown. That's why it is part of 
this patch.
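
To illustrate the Java rule in play, here is a minimal sketch (the class and 
method names are hypothetical, not the actual generated record code): once the 
buffer's read path no longer declares IOException, a catch clause for it 
becomes illegal and javac rejects the file.

    import java.io.IOException;

    class BufferInput {
        // Hypothetical reader: after the change, reading a buffer from an
        // in-memory source no longer declares "throws IOException".
        byte[] readBuffer() {
            return new byte[0];
        }
    }

    class OnlyBufferRecord {
        void deserialize(BufferInput in) {
            try {
                byte[] buf = in.readBuffer();
            } catch (IOException e) {
                // javac error: exception java.io.IOException is never
                // thrown in body of corresponding try statement
            }
        }
    }

Dropping the try/catch from the generated code for buffer-only records is what 
resolves the error.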

> The underlying data structure, ByteArrayOutputStream, for the buffer type of 
> Hadoop record is inappropriate
> --------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-735
>                 URL: https://issues.apache.org/jira/browse/HADOOP-735
>             Project: Hadoop
>          Issue Type: Bug
>          Components: record
>    Affects Versions: 0.9.2
>            Reporter: Runping Qi
>         Assigned To: Milind Bhandarkar
>             Fix For: 0.11.0
>
>         Attachments: BytesWritable.patch
>
>
> With ByteArrayOutputStream as the underlying data structure for a buffer, the 
> user is forced to convert it into a byte[] object in order to do any 
> operation other than sequential append on the buffer. The conversion creates 
> a new copy of the bytes, which causes a huge performance problem. 
> BytesWritable seems to be a better replacement.
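
As a rough sketch of the performance point (using the current BytesWritable 
accessors getBytes()/getLength(); the accessor names in 0.9-era code may 
differ):

    import java.io.ByteArrayOutputStream;
    import org.apache.hadoop.io.BytesWritable;

    public class BufferAccess {
        public static void main(String[] args) {
            byte[] data = {1, 2, 3};

            // ByteArrayOutputStream: any access beyond append forces
            // toByteArray(), which copies the whole internal buffer.
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            baos.write(data, 0, data.length);
            byte first = baos.toByteArray()[0];   // full copy to read one byte

            // BytesWritable: getBytes() exposes the backing array in place;
            // only the first getLength() bytes are valid.
            BytesWritable bw = new BytesWritable(data);
            byte same = bw.getBytes()[0];         // no copy
            int length = bw.getLength();
        }
    }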
