[
http://issues.apache.org/jira/browse/HADOOP-54?page=comments#action_12422961 ]
Arun C Murthy commented on HADOOP-54:
-------------------------------------
Rebuttals:
1. append
I like Owen's idea about generalising the interface to:
append(byte[] key, int keyOffset, int keyLength, byte[] value, int
valueOffset, int valueLength)
with its associated 'clarity' for the user and the advantage that it precludes
an extra copy into a single buffer... a couple of +1s and I'll take this path.
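For concreteness, here's a usage sketch of why the (buffer, offset, length)
form precludes the copy (the writer, the buffers, and the single-buffer
variant shown are all hypothetical):

  // Separate triples let key and value be appended straight from
  // wherever they already sit:
  writer.append(keyBytes, 0, keyBytes.length,
                valBytes, 0, valBytes.length);

  // A single-buffer variant would force the caller to first copy both
  // into one contiguous array:
  byte[] record = new byte[keyBytes.length + valBytes.length];
  System.arraycopy(keyBytes, 0, record, 0, keyBytes.length);
  System.arraycopy(valBytes, 0, record, keyBytes.length, valBytes.length);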
@Owen: I understand that this interface currently appends a 'preserialized'
key/value pair, but as you point out, with 'compressed blocks' this gets
worrisome in the long run (things like 'custom compression' will require
'serializedRef'-like objects soon enough)...
How about letting the user pass in the preserialized key/value, and we still
go ahead and honour 'deflateValues' in the append? Honouring 'compress'
directives will ensure consistent behaviour with the rest of the APIs (read:
append(Writable key, Writable value)), and the uncompress in the
SequenceFile.Reader.next call will ensure the what-you-store-is-what-you-get
contract.
Otherwise a true 'rawAppend' will mean (especially in the 'compressed blocks'
context) that I will need to create a 'block' with a single key/value pair and
write it out to disk...
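To make that concrete, a minimal standalone sketch of a raw append that still
honours the compress flag (all names hypothetical; java.util.zip's deflater
stands in for whatever codec we end up using):

  import java.io.*;
  import java.util.zip.DeflaterOutputStream;

  class RawAppendSketch {
    private final DataOutputStream out;
    private final boolean deflateValues;  // same flag the Writable path honours

    RawAppendSketch(DataOutputStream out, boolean deflateValues) {
      this.out = out;
      this.deflateValues = deflateValues;
    }

    // Appends a preserialized key/value; the value is deflated here when
    // compression is on, so SequenceFile.Reader.next can inflate it and
    // hand back exactly what was stored.
    void append(byte[] key, int keyOff, int keyLen,
                byte[] val, int valOff, int valLen) throws IOException {
      byte[] v = val;
      int vOff = valOff, vLen = valLen;
      if (deflateValues) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DeflaterOutputStream deflater = new DeflaterOutputStream(buf);
        deflater.write(val, valOff, valLen);   // compress the value only
        deflater.close();
        v = buf.toByteArray();
        vOff = 0;
        vLen = v.length;
      }
      out.writeInt(keyLen + vLen);             // record length
      out.writeInt(keyLen);                    // key length
      out.write(key, keyOff, keyLen);
      out.write(v, vOff, vLen);
    }
  }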
Summarising: we can switch to the 'general' append interface and honour
'compress' directives in it... ensuring consistency & clarity. (I also
volunteer to fix the 'older' append calls in SequenceFile.java; Owen can
then ...)
2. flush
I should have worded things more carefully... I was looking to see if there
is already a compelling use case for this.
Looks like there isn't... I'll drop this.
(Adding a public 'flush' later is infinitely easier than adding it now and
removing it later... :) )
@Eric: Yep, the 'flush' does create a block boundary; it's used internally in
two cases for now: (a) when sizeof(keyBuffer + valueBuffer) exceeds
minBlockSize, and (b) when the 'close' API is called.
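In code, the two triggers look roughly like this (standalone sketch, all
names hypothetical):

  import java.io.*;

  class BlockWriterSketch {
    private final DataOutputStream out;
    private final int minBlockSize;
    private final ByteArrayOutputStream keyBuffer = new ByteArrayOutputStream();
    private final ByteArrayOutputStream valueBuffer = new ByteArrayOutputStream();

    BlockWriterSketch(DataOutputStream out, int minBlockSize) {
      this.out = out;
      this.minBlockSize = minBlockSize;
    }

    void append(byte[] key, byte[] value) throws IOException {
      keyBuffer.write(key);
      valueBuffer.write(value);
      if (keyBuffer.size() + valueBuffer.size() >= minBlockSize) {
        flushBlock();                          // case (a)
      }
    }

    void close() throws IOException {
      if (keyBuffer.size() + valueBuffer.size() > 0) {
        flushBlock();                          // case (b)
      }
      out.close();
    }

    private void flushBlock() throws IOException {
      // The real patch would compress the buffered keys/values together
      // here; this sketch just writes them and resets for the next block.
      out.writeInt(keyBuffer.size());
      keyBuffer.writeTo(out);
      out.writeInt(valueBuffer.size());
      valueBuffer.writeTo(out);
      keyBuffer.reset();
      valueBuffer.reset();
      // ... followed by a sync mark, so readers can seek between blocks
    }
  }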
3. configuration
I concur with the need to keep things simple... I'll just hardcode a 'sane'
value for now.
(Yes, there is a way via the constructor to set the buffer size on creation.)
(PS: I do hear alarm bells when I see that, as it stands, SequenceFile.Reader
gets a 'Configuration' object via the constructor while the 'symmetric'
SequenceFile.Writer doesn't... but that's a topic for another discussion.)
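For the record, point 3 as code (the value and name are hypothetical, just to
show the shape):

  // Hardcoded 'sane' default; the existing constructor argument can still
  // override the buffer size at creation time.
  private static final int DEFAULT_MIN_BLOCK_SIZE = 1 << 20;  // 1MB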
> SequenceFile should compress blocks, not individual entries
> -----------------------------------------------------------
>
> Key: HADOOP-54
> URL: http://issues.apache.org/jira/browse/HADOOP-54
> Project: Hadoop
> Issue Type: Improvement
> Components: io
> Affects Versions: 0.2.0
> Reporter: Doug Cutting
> Assigned To: Arun C Murthy
> Fix For: 0.5.0
>
> Attachments: VIntCompressionResults.txt
>
>
> SequenceFile will optionally compress individual values. But both
> compression and performance would be much better if sequences of keys and
> values are compressed together. Sync marks should only be placed between
> blocks. This will require some changes to MapFile too, so that all file
> positions stored there are the positions of blocks, not entries within
> blocks. Probably this can be accomplished by adding a
> getBlockStartPosition() method to SequenceFile.Writer.
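For reference, one way MapFile could consume such a method (hypothetical
sketch, not a committed API):

  // MapFile.Writer would record the start of the current block, rather
  // than the position of the individual entry, in its index:
  long blockStart = seqWriter.getBlockStartPosition();   // hypothetical
  indexWriter.append(key, new LongWritable(blockStart));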