[
http://issues.apache.org/jira/browse/HADOOP-54?page=comments#action_12423037 ]
Arun C Murthy commented on HADOOP-54:
-------------------------------------
> I suggest adding the binary append API suggested by Owen and deprecating the
> old binary append API, but making it work back-compatibly. Thus it should
> accept pre-compressed (if compression is enabled) values, de-compress them,
> then call the new append method.
My hunch is that we do not need to worry about 'pre-compressed' values, since
as of today neither of the 'raw' APIs honours compression anyway... is this true?
In fact, we could take the route where 'append' compresses whatever data is
passed along, thus possibly compressing the data twice. With the 'symmetric'
call to next (which decompresses), we hand back exactly the data the user
passed in the first place... I had a chat with Owen about this and we both
felt it could work.
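A rough, self-contained sketch of that symmetric behaviour, using plain
java.util.zip for illustration (the class and method names here are made up,
not the actual SequenceFile code):

import java.io.*;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

class SymmetricAppendSketch {
  // 'append' always compresses whatever bytes it is given; if the caller
  // passed pre-compressed data it simply gets compressed a second time.
  static void append(DataOutputStream out, byte[] value) throws IOException {
    ByteArrayOutputStream compressed = new ByteArrayOutputStream();
    DeflaterOutputStream def = new DeflaterOutputStream(compressed);
    def.write(value);
    def.finish();
    out.writeInt(compressed.size());   // length-prefix the compressed record
    compressed.writeTo(out);
  }

  // 'next' undoes exactly one level of compression, so the caller gets back
  // whatever bytes were originally handed to 'append'.
  static byte[] next(DataInputStream in) throws IOException {
    byte[] compressed = new byte[in.readInt()];
    in.readFully(compressed);
    InflaterInputStream inf =
        new InflaterInputStream(new ByteArrayInputStream(compressed));
    ByteArrayOutputStream value = new ByteArrayOutputStream();
    byte[] buf = new byte[4096];
    for (int n; (n = inf.read(buf)) != -1; ) {
      value.write(buf, 0, n);
    }
    return value.toByteArray();
  }
}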
> We unfortunately lose the ability to move individual compressed values
> around. If a mapper does not touch values, it would be best to only
> decompress values on reduce nodes, rather than decompress and recompress them
> on map nodes, since compression can be computationally expensive.
I can see a way around this if it will really make a difference...
We can take the path where values are decompressed only 'on demand', i.e. a
series of calls to SequenceFile.Reader.next(Writable key) does not decompress
'valBuffer' (or even valLengthsBuffer). Hence, when we read a compressed
'block', we need not decompress the values until we see a call to either
SequenceFile.Reader.next(Writable key, Writable value) or
SequenceFile.Reader.next(DataOutputBuffer buffer).
Implementing this 'lazy decompression' of values is slightly more complex...
is it worth it?
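For what it's worth, a minimal sketch of the lazy scheme, assuming the block's
keys are already decoded and the value buffer is kept compressed (every name
here, e.g. LazyValueBlock, is hypothetical, not the real Reader internals):

import java.io.*;
import java.util.zip.InflaterInputStream;

class LazyValueBlock {
  private final byte[][] keys;            // keys of the block, already decoded
  private final byte[] compressedValues;  // value buffer, still compressed
  private DataInputStream values;         // built only on first value access
  private int pos = -1;                   // current record within the block
  private int valuePos = -1;              // last record whose value was inflated
  private byte[] cached;                  // value of record 'valuePos'

  LazyValueBlock(byte[][] keys, byte[] compressedValues) {
    this.keys = keys;
    this.compressedValues = compressedValues;
  }

  // next(key)-style call: advances the position, never touches compressedValues.
  byte[] nextKey() {
    return ++pos < keys.length ? keys[pos] : null;
  }

  // next(key, value)-style call: the value buffer is inflated on first demand,
  // and values of records skipped via nextKey() are read past and discarded.
  byte[] currentValue() throws IOException {
    if (values == null) {
      values = new DataInputStream(
          new InflaterInputStream(new ByteArrayInputStream(compressedValues)));
    }
    while (valuePos < pos) {
      byte[] v = new byte[values.readInt()];
      values.readFully(v);
      cached = v;
      valuePos++;
    }
    return cached;
  }
}

So a scan that only looks at keys never pays the inflation cost for that block.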
-*-*-
PS:
1. Should SequenceFile.Reader.next(DataOutputBuffer buffer) be changed to
SequenceFile.Reader.next(DataOutputBuffer keyBuffer, DataOutputBuffer
valBuffer), similar to the 'raw' append API?
2. Does it make sense to make compression configurable for keys and values
separately, i.e. let the user specify (during creation) whether to compress
'keys', 'values', or both? Overkill for now? Maybe it makes sense once we move
to custom compressors for each? (A rough sketch of this follows the list.)
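On item 2, a hypothetical sketch of what independently configurable key/value
compression could look like (the writer class, constructor and flags below are
made up purely for illustration):

import java.io.*;
import java.util.zip.DeflaterOutputStream;

class ConfigurableCompressionWriter {
  private final DataOutputStream out;
  private final boolean compressKeys;
  private final boolean compressValues;

  ConfigurableCompressionWriter(OutputStream raw,
                                boolean compressKeys, boolean compressValues) {
    this.out = new DataOutputStream(raw);
    this.compressKeys = compressKeys;
    this.compressValues = compressValues;
  }

  void append(byte[] key, byte[] value) throws IOException {
    writeField(key, compressKeys);
    writeField(value, compressValues);
  }

  // Each field is length-prefixed; compression is applied only if configured
  // for that side, so keys and values can be treated differently.
  private void writeField(byte[] data, boolean compress) throws IOException {
    byte[] encoded = data;
    if (compress) {
      ByteArrayOutputStream buf = new ByteArrayOutputStream();
      DeflaterOutputStream def = new DeflaterOutputStream(buf);
      def.write(data);
      def.finish();
      encoded = buf.toByteArray();
    }
    out.writeInt(encoded.length);
    out.write(encoded);
  }
}

Whether this is worth the extra header/configuration plumbing is exactly the
'overkill' question above.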
> SequenceFile should compress blocks, not individual entries
> -----------------------------------------------------------
>
> Key: HADOOP-54
> URL: http://issues.apache.org/jira/browse/HADOOP-54
> Project: Hadoop
> Issue Type: Improvement
> Components: io
> Affects Versions: 0.2.0
> Reporter: Doug Cutting
> Assigned To: Arun C Murthy
> Fix For: 0.5.0
>
> Attachments: VIntCompressionResults.txt
>
>
> SequenceFile will optionally compress individual values. But both
> compression and performance would be much better if sequences of keys and
> values are compressed together. Sync marks should only be placed between
> blocks. This will require some changes to MapFile too, so that all file
> positions stored there are the positions of blocks, not entries within
> blocks. Probably this can be accomplished by adding a
> getBlockStartPosition() method to SequenceFile.Writer.
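For reference, a minimal, hypothetical illustration of the block scheme
described above (buffer records, compress the block as a whole, put a sync
mark between blocks, and expose the block start position for MapFile); all
names and thresholds here are made up:

import java.io.*;
import java.util.zip.DeflaterOutputStream;

class BlockWriterSketch {
  private static final byte[] SYNC = new byte[16];   // placeholder sync mark
  private final DataOutputStream out;
  private final ByteArrayOutputStream block = new ByteArrayOutputStream();
  private long blockStart = 0;   // file offset of the block being buffered

  BlockWriterSketch(OutputStream raw) { this.out = new DataOutputStream(raw); }

  void append(byte[] key, byte[] value) throws IOException {
    DataOutputStream rec = new DataOutputStream(block);
    rec.writeInt(key.length);   rec.write(key);
    rec.writeInt(value.length); rec.write(value);
    if (block.size() > 64 * 1024) flushBlock();      // illustrative threshold
  }

  // Analogue of the proposed getBlockStartPosition(): MapFile would store this
  // offset instead of the position of each individual entry.
  long getBlockStartPosition() { return blockStart; }

  void close() throws IOException {
    if (block.size() > 0) flushBlock();
    out.flush();
  }

  private void flushBlock() throws IOException {
    out.write(SYNC);                                 // sync mark delimits blocks
    ByteArrayOutputStream compressed = new ByteArrayOutputStream();
    DeflaterOutputStream def = new DeflaterOutputStream(compressed);
    block.writeTo(def);
    def.finish();
    out.writeInt(compressed.size());                 // compressed block length
    compressed.writeTo(out);
    block.reset();
    blockStart = out.size();                         // next block starts here
  }
}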
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira