[ http://issues.apache.org/jira/browse/HADOOP-54?page=comments#action_12423273 ]

Arun C Murthy commented on HADOOP-54:
-------------------------------------

Regarding 'incremental decompress' in SequenceFile.Reader:

Maybe I'm missing something here - but isn't one decompress (of the whole 
block) followed by n reads of keys (or values) going to have the same 
amortized cost as m decompresses (where m < n) plus n reads, since either 
way every byte of the block ends up being decompressed exactly once? In 
that case I don't think the complexity of managing this complicated beast 
(tracking how much of the block has been decompressed, possibly having to 
decompress multiple times to get at large values, etc.) is worth it...
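
Something like the following is what I have in mind - decompress the 
block into a buffer once and serve every subsequent read out of it. A 
rough sketch only; loadBlock/nextEntry and the length-prefixed entry 
layout are illustrative, not the actual SequenceFile.Reader code:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.util.zip.InflaterInputStream;

public class WholeBlockReader {
  private DataInputStream entries;

  // 'compressedBlock' stands in for the bytes between two sync marks.
  public void loadBlock(byte[] compressedBlock) throws IOException {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    InflaterInputStream in =
        new InflaterInputStream(new ByteArrayInputStream(compressedBlock));
    byte[] chunk = new byte[4096];
    int n;
    while ((n = in.read(chunk)) != -1) {
      buf.write(chunk, 0, n);           // one decompress of the whole block
    }
    in.close();
    entries = new DataInputStream(
        new ByteArrayInputStream(buf.toByteArray()));
  }

  // Each read is then a cheap copy out of the buffer - no inflater state
  // to carry between entries, no partial-decompress bookkeeping.
  public byte[] nextEntry() throws IOException {
    int len = entries.readInt();        // assumes length-prefixed entries
    byte[] entry = new byte[len];
    entries.readFully(entry);
    return entry;
  }
}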


> SequenceFile should compress blocks, not individual entries
> -----------------------------------------------------------
>
>                 Key: HADOOP-54
>                 URL: http://issues.apache.org/jira/browse/HADOOP-54
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: io
>    Affects Versions: 0.2.0
>            Reporter: Doug Cutting
>         Assigned To: Arun C Murthy
>             Fix For: 0.5.0
>
>         Attachments: VIntCompressionResults.txt
>
>
> SequenceFile will optionally compress individual values.  But both 
> compression and performance would be much better if sequences of keys and 
> values are compressed together.  Sync marks should only be placed between 
> blocks.  This will require some changes to MapFile too, so that all file 
> positions stored there are the positions of blocks, not entries within 
> blocks.  Probably this can be accomplished by adding a 
> getBlockStartPosition() method to SequenceFile.Writer.
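
For reference, a rough sketch of the writer side of the proposal above - 
entries accumulate uncompressed, a full block is deflated in one shot, 
and sync marks go only between blocks. BLOCK_SIZE, writeSync and the 
length-prefixed layout are placeholders, not the proposed API:

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.zip.DeflaterOutputStream;

public class BlockWriter {
  private static final int BLOCK_SIZE = 64 * 1024;  // illustrative threshold
  private final DataOutputStream out;
  private final ByteArrayOutputStream block = new ByteArrayOutputStream();
  private long blockStart;    // file position where the current block starts

  public BlockWriter(DataOutputStream out) {
    this.out = out;
  }

  // Keys and values of a whole block are compressed together,
  // rather than each value on its own.
  public void append(byte[] key, byte[] value) throws IOException {
    DataOutputStream entry = new DataOutputStream(block);
    entry.writeInt(key.length);
    entry.write(key);
    entry.writeInt(value.length);
    entry.write(value);
    if (block.size() >= BLOCK_SIZE) {
      flushBlock();
    }
  }

  // Roughly what a getBlockStartPosition() on SequenceFile.Writer would
  // expose, so MapFile can record block positions instead of entry ones.
  public long getBlockStartPosition() {
    return blockStart;
  }

  private void flushBlock() throws IOException {
    blockStart = out.size();  // DataOutputStream counts bytes written
    writeSync();              // sync mark between blocks only
    DeflaterOutputStream z = new DeflaterOutputStream(out);
    block.writeTo(z);         // compress the whole block in one shot
    z.finish();
    block.reset();
  }

  private void writeSync() throws IOException {
    out.writeInt(0xcafebabe); // stand-in for the real sync marker
  }
}

Buffering uncompressed and deflating per block is what lets the codec 
see long runs of similar keys and values together, which is where the 
compression win described above would come from.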

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira