[ https://issues.apache.org/jira/browse/HADOOP-3315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12633970#action_12633970 ]

Hong Tang commented on HADOOP-3315:
-----------------------------------

bq. -1 on decompressing a block twice if it can be avoided.
What I mean is that TFile currently does not directly support random lookup; the 
only way to perform one is something like the following: 
<code>
// Locate the key, then open a scanner at that location and read the entry.
Location l = reader.locate(key);
Scanner scanner = reader.createScanner(l, reader.end());
scanner.getKey(key);
scanner.getValue(value);
scanner.close();
</code>
The above code reads the compressed block twice: once while locating the key, 
and again when the scanner reads the entry. There are multiple ways to support 
random lookup without incurring this overhead, and I need to take a closer look 
to figure out the best way to support it.
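To make the overhead concrete, here is a toy model (not TFile code) of the two lookup paths. All names here are hypothetical; block "decompression" is simulated by a counter, so the two-pass path counts two decompressions while a hypothetical one-pass (seek-within-block) path counts one:

```java
import java.util.TreeMap;

// Toy model of the two lookup paths: each call to decompressBlock() stands in
// for decompressing one compressed block. TFile's real classes are not used.
public class LookupSketch {
    static int decompressions = 0;

    // A "block" is modeled as a sorted map of key/value pairs.
    static TreeMap<String, String> decompressBlock(TreeMap<String, String> raw) {
        decompressions++;            // one simulated block decompression
        return new TreeMap<>(raw);
    }

    // Two-pass path: locate() decompresses the block to find the key,
    // then the scanner decompresses it again to read the entry.
    static String twoPassLookup(TreeMap<String, String> block, String key) {
        decompressBlock(block).containsKey(key);   // models reader.locate(key)
        return decompressBlock(block).get(key);    // models the scanner's read
    }

    // One-pass path: a hypothetical seek-capable scanner decompresses the
    // block once, then both finds and reads the entry from that copy.
    static String onePassLookup(TreeMap<String, String> block, String key) {
        TreeMap<String, String> b = decompressBlock(block);
        return b.get(key);
    }

    public static void main(String[] args) {
        TreeMap<String, String> block = new TreeMap<>();
        block.put("k1", "v1");

        decompressions = 0;
        twoPassLookup(block, "k1");
        System.out.println("two-pass decompressions: " + decompressions);  // 2

        decompressions = 0;
        onePassLookup(block, "k1");
        System.out.println("one-pass decompressions: " + decompressions);  // 1
    }
}
```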

> New binary file format
> ----------------------
>
>                 Key: HADOOP-3315
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3315
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: io
>            Reporter: Owen O'Malley
>            Assignee: Amir Youssefi
>         Attachments: HADOOP-3315_20080908_TFILE_PREVIEW_WITH_LZO_TESTS.patch, 
> HADOOP-3315_20080915_TFILE.patch, TFile Specification Final.pdf
>
>
> SequenceFile's block compression format is too complex and requires 4 codecs 
> to compress or decompress. It would be good to have a file format that only 
> needs 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
