[ http://issues.apache.org/jira/browse/HADOOP-54?page=comments#action_12422665 ]

Owen O'Malley commented on HADOOP-54:
-------------------------------------
The "raw" append/next interface for SequenceFile is intended to expose the raw bytes from the file. Its intended use was for operations like merging and sorting, where the values don't need to be instantiated, so the lack of decompression was deliberate. However, with the switch to block compression, that no longer makes sense. In the new block-compression reader and writer, just treat the raw bytes as a key or value that has already been serialized for you.

> SequenceFile should compress blocks, not individual entries
> -----------------------------------------------------------
>
>                 Key: HADOOP-54
>                 URL: http://issues.apache.org/jira/browse/HADOOP-54
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: io
>    Affects Versions: 0.2.0
>            Reporter: Doug Cutting
>         Assigned To: Arun C Murthy
>             Fix For: 0.5.0
>
>         Attachments: VIntCompressionResults.txt
>
>
> SequenceFile will optionally compress individual values. But both
> compression and performance would be much better if sequences of keys and
> values are compressed together. Sync marks should only be placed between
> blocks. This will require some changes to MapFile too, so that all file
> positions stored there are the positions of blocks, not entries within
> blocks. Probably this can be accomplished by adding a
> getBlockStartPosition() method to SequenceFile.Writer.

--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira
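To see why the issue favors compressing blocks over individual entries, here is a minimal sketch (not Hadoop code; the class and method names are illustrative, and plain `java.util.zip.Deflater` stands in for whatever codec SequenceFile would use). It compresses 100 small, similar records one at a time and then as a single block:

```java
import java.util.zip.Deflater;

// Illustrative sketch, not part of SequenceFile: per-record compression
// pays fixed stream overhead on every entry and cannot exploit redundancy
// across entries, while block compression sees all the records at once.
public class BlockVsRecordCompression {

    // Return the deflated size of one byte buffer.
    static int deflatedSize(byte[] data) {
        Deflater d = new Deflater();
        d.setInput(data);
        d.finish();
        byte[] buf = new byte[data.length + 64];
        int n = 0;
        while (!d.finished()) {
            n += d.deflate(buf, 0, buf.length);
        }
        d.end();
        return n;
    }

    public static void main(String[] args) {
        // 100 small key/value records, as a SequenceFile block might hold.
        StringBuilder block = new StringBuilder();
        int perRecordTotal = 0;
        for (int i = 0; i < 100; i++) {
            String record = String.format("key%04d\tvalue%04d\n", i, i);
            perRecordTotal += deflatedSize(record.getBytes());
            block.append(record);
        }
        int blockTotal = deflatedSize(block.toString().getBytes());
        System.out.println("per-record compressed total: " + perRecordTotal);
        System.out.println("whole-block compressed size: " + blockTotal);
    }
}
```

The block-compressed size comes out far smaller, because the shared context between adjacent records is only visible to the compressor when the records are compressed together. This is also why, in the block-compressed format, the raw read/write path hands you already-serialized key and value bytes rather than skipping decompression: a single raw record is no longer independently decodable.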
