[
http://issues.apache.org/jira/browse/HADOOP-54?page=comments#action_12424470 ]
Arun C Murthy commented on HADOOP-54:
-------------------------------------
Addendum:
I spoke to Owen and confirmed that it makes sense to implement 'lazy
decompression' of values in block-compressed files, i.e. a series of calls to
SequenceFile.Reader.next(Writable key)
will not decompress the 'value' block until a call to either
SequenceFile.Reader.next(Writable key, Writable val) or
SequenceFile.Reader.getCurrentValue(Writable val) (explained below).
Along the same lines, it makes sense to add a 'getCurrentValue' API to the
Reader, letting the user inspect the key and only then decide whether to
fetch the value (lazy decompression of the value applies here too, with the
corresponding performance benefit).
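To make the proposal concrete, here is a rough sketch of the intended usage.
getCurrentValue() is the new API being proposed above; the Text key/value
types and the isInteresting()/process() helpers are just placeholders:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.SequenceFile;
  import org.apache.hadoop.io.Text;

  public class LazyValueScan {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      FileSystem fs = FileSystem.get(conf);
      SequenceFile.Reader reader =
          new SequenceFile.Reader(fs, new Path(args[0]), conf);
      Text key = new Text();
      Text value = new Text();
      while (reader.next(key)) {          // reads only the key; the value
                                          // block stays compressed
        if (isInteresting(key)) {         // decide from the key alone
          reader.getCurrentValue(value);  // decompress/deserialize the value
                                          // only when it is actually wanted
          process(key, value);
        }
      }
      reader.close();
    }

    // placeholder filter: any per-key test the application cares about
    private static boolean isInteresting(Text key) {
      return key.getLength() > 0;
    }

    // placeholder consumer
    private static void process(Text key, Text value) {
      System.out.println(key + "\t" + value);
    }
  }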
Thoughts?
> SequenceFile should compress blocks, not individual entries
> -----------------------------------------------------------
>
> Key: HADOOP-54
> URL: http://issues.apache.org/jira/browse/HADOOP-54
> Project: Hadoop
> Issue Type: Improvement
> Components: io
> Affects Versions: 0.2.0
> Reporter: Doug Cutting
> Assigned To: Arun C Murthy
> Fix For: 0.5.0
>
> Attachments: VIntCompressionResults.txt
>
>
> SequenceFile will optionally compress individual values. But both
> compression and performance would be much better if sequences of keys and
> values were compressed together. Sync marks should only be placed between
> blocks. This will require some changes to MapFile too, so that all file
> positions stored there are the positions of blocks, not entries within
> blocks. Probably this can be accomplished by adding a
> getBlockStartPosition() method to SequenceFile.Writer.
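For the MapFile change described in the quoted description, a very rough
sketch of the intent, assuming SequenceFile.Writer grows the proposed
getBlockStartPosition() method (the data/index writers, size counter and
indexInterval below stand in for MapFile.Writer internals; nothing here
beyond SequenceFile.Writer.append and LongWritable is existing API):

  // Inside a hypothetical MapFile.Writer.append(key, val):
  data.append(key, val);                             // write the entry
  if (++size % indexInterval == 0) {
    // Record the start of the enclosing block rather than the entry's own
    // position, since entries inside a compressed block are not seekable.
    long blockStart = data.getBlockStartPosition();  // proposed API
    index.append(key, new LongWritable(blockStart));
  }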