[ https://issues.apache.org/jira/browse/HADOOP-3315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12629694#action_12629694 ]

Hong Tang commented on HADOOP-3315:
-----------------------------------

> Why did you write your own variable length integer encoding? Is this superior 
> to protocol buffers' encoding? It seems like writing your own encoding/decoding 
> format makes it harder for people to implement this in other languages, no? 

No, I have not looked at Protocol Buffers. Does it have a VInt/VLong 
implementation that can be used independently?

As I commented previously, the only variable length integer format I found is 
the one from WritableUtils, which has the odd property that going from a 1-byte 
to a 2-byte encoding only roughly doubles the range of representable values.
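
To illustrate (a quick sketch against the current WritableUtils API; the sample 
values are arbitrary): if I read the code right, one byte covers -112..127, and 
a second byte only extends the positive range to 255, whereas a base-128 varint 
would still fit 16383 in two bytes.

import org.apache.hadoop.io.WritableUtils;

public class VLongSizeDemo {
  public static void main(String[] args) {
    // Print how many bytes the existing WritableUtils VLong format needs
    // for a few sample values. One byte reaches 127; two bytes stop at 255;
    // 16383 already takes three bytes here.
    long[] samples = { 0, 127, 128, 255, 256, 16383, 16384 };
    for (long v : samples) {
      System.out.println(v + " -> " + WritableUtils.getVIntSize(v) + " byte(s)");
    }
  }
}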

I am open to adopting VInt/VLong encoding schemes that are relatively 
standardized and provide good coverage for small to medium integers (which is 
the primary purpose of having variable length integers).
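
For comparison, a minimal sketch of the base-128 scheme that protocol buffers 
(and LEB128) use for unsigned values; each additional byte multiplies the 
representable range by 128 instead of roughly doubling it. The class and method 
names below are just for illustration.

import java.io.ByteArrayOutputStream;

public class Base128Varint {
  // Emit 7 bits per byte, least-significant group first; the high bit of
  // each byte is set when more bytes follow.
  public static void writeUnsigned(ByteArrayOutputStream out, long v) {
    while ((v & ~0x7FL) != 0) {
      out.write((int) ((v & 0x7F) | 0x80));
      v >>>= 7;
    }
    out.write((int) v);
  }

  public static void main(String[] args) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    writeUnsigned(out, 300);        // 300 encodes to two bytes: 0xAC 0x02
    System.out.println(out.size()); // prints 2
  }
}

Signed values would need an extra zig-zag mapping on top of this, but the size 
behavior for small to medium integers is the main concern here.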



> New binary file format
> ----------------------
>
>                 Key: HADOOP-3315
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3315
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: io
>            Reporter: Owen O'Malley
>            Assignee: Amir Youssefi
>         Attachments: HADOOP-3315_TFILE_PREVIEW.patch, 
> HADOOP-3315_TFILE_PREVIEW_WITH_LZO_TESTS.patch, TFile Specification Final.pdf
>
>
> SequenceFile's block compression format is too complex and requires 4 codecs 
> to compress or decompress. It would be good to have a file format that only 
> needs 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
