[ http://issues.apache.org/jira/browse/HADOOP-54?page=comments#action_12423167 ]

Owen O'Malley commented on HADOOP-54:
-------------------------------------
Eric, I don't see how to implement both block compression, which is a huge win, and access to a pre-decompression representation, especially if what you want to do with the pre-decompression representation is sorting or merging. Therefore, I was (and am) proposing that the "raw" access be a little less raw and that the byte[] representation always be decompressed. Am I missing something? This is a semantic change to the "raw" SequenceFile API, but I think it is required to get block-level compression.

On a slight tangent, I think that the SequenceFile.Reader should not decompress the entire block, but only enough to get the next key/value pair.

> SequenceFile should compress blocks, not individual entries
> -----------------------------------------------------------
>
>                 Key: HADOOP-54
>                 URL: http://issues.apache.org/jira/browse/HADOOP-54
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: io
>    Affects Versions: 0.2.0
>            Reporter: Doug Cutting
>         Assigned To: Arun C Murthy
>             Fix For: 0.5.0
>
>         Attachments: VIntCompressionResults.txt
>
>
> SequenceFile will optionally compress individual values. But both
> compression and performance would be much better if sequences of keys and
> values are compressed together. Sync marks should only be placed between
> blocks. This will require some changes to MapFile too, so that all file
> positions stored there are the positions of blocks, not entries within
> blocks. Probably this can be accomplished by adding a
> getBlockStartPosition() method to SequenceFile.Writer.
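
To make the block-compression proposal above concrete, here is a rough sketch of a writer that buffers key/value pairs, compresses each block as a single unit, writes the sync mark only between blocks, and exposes a getBlockStartPosition() that MapFile could store instead of per-entry offsets. This is not the actual SequenceFile.Writer API: the class name, the SYNC and BLOCK_SIZE constants, the record framing (<int keyLen><key><int valLen><value>), and the use of java.util.zip as the codec are all stand-ins for illustration.

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.zip.DeflaterOutputStream;

// Illustrative stand-in for a block-compressing SequenceFile.Writer.
public class BlockWriter implements AutoCloseable {
    private static final byte[] SYNC = new byte[16];   // placeholder sync mark, written only between blocks
    private static final int BLOCK_SIZE = 64 * 1024;   // flush threshold in buffered (uncompressed) bytes

    private final RandomAccessFile out;
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private final DataOutputStream records = new DataOutputStream(buffer);
    private long blockStart;                            // file offset of the current (pending) block

    public BlockWriter(String path) throws IOException {
        out = new RandomAccessFile(path, "rw");
        out.setLength(0);                               // start fresh (sketch only)
        blockStart = out.getFilePointer();
    }

    /** Append one key/value pair; keys and values are compressed together per block. */
    public void append(byte[] key, byte[] value) throws IOException {
        records.writeInt(key.length);
        records.write(key);
        records.writeInt(value.length);
        records.write(value);
        if (buffer.size() >= BLOCK_SIZE) {
            flushBlock();
        }
    }

    /** Offset of the block that the next appended entry will belong to;
     *  this is what MapFile would index instead of per-entry positions. */
    public long getBlockStartPosition() {
        return blockStart;
    }

    private void flushBlock() throws IOException {
        if (buffer.size() == 0) return;
        // Compress the whole buffered block as one unit.
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (DeflaterOutputStream def = new DeflaterOutputStream(compressed)) {
            buffer.writeTo(def);
        }
        out.write(SYNC);                                // sync mark between blocks only
        out.writeInt(compressed.size());                // compressed block length
        out.write(compressed.toByteArray());
        buffer.reset();
        blockStart = out.getFilePointer();              // the next block starts here
    }

    @Override
    public void close() throws IOException {
        flushBlock();
        out.close();
    }
}

The only point that matters for MapFile is that getBlockStartPosition() always returns a sync boundary, so every index entry points at the start of a block rather than at an entry inside one.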

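And a matching sketch of the lazy decompression suggested in the comment above: the compressed bytes of one block are wrapped in a streaming Inflater, so each call to next() inflates only as much as is needed for the next key/value pair instead of expanding the whole block up front. Again, the names and the record framing are assumptions carried over from the writer sketch, not the real SequenceFile.Reader.

import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.util.zip.InflaterInputStream;

// Illustrative reader for one compressed block framed as
// <int keyLen><key><int valLen><value> ... (see the writer sketch above).
public class LazyBlockReader {
    private final DataInputStream in;

    public LazyBlockReader(byte[] compressedBlock) {
        // Decompression is streamed: bytes are inflated only as records are read,
        // not expanded into a whole-block byte[] up front.
        in = new DataInputStream(
                new InflaterInputStream(new ByteArrayInputStream(compressedBlock)));
    }

    /** Returns the next {key, value} pair, inflating only what is needed,
     *  or null when the block is exhausted. */
    public byte[][] next() throws IOException {
        int keyLen;
        try {
            keyLen = in.readInt();
        } catch (EOFException eof) {
            return null;                                // end of block
        }
        byte[] key = new byte[keyLen];
        in.readFully(key);
        byte[] value = new byte[in.readInt()];
        in.readFully(value);
        return new byte[][] { key, value };
    }
}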