[ http://issues.apache.org/jira/browse/HADOOP-54?page=comments#action_12422661 ]

Arun C Murthy commented on HADOOP-54:
-------------------------------------

Issues which I came across while implementing the above proposal...

1. The implementation of the public interface

  SequenceFile.Writer.append(byte[] data, int start, int length, int keyLength)

  as it exists today, does not honour 'deflateValues', i.e. it does not 
compress 'values' at all. I feel this is contrary to users' expectations, 
since the other 'append' interface does compress values, and it breaks the 
abstraction of 'compressed' sequence files. I propose we remedy this now and 
add the necessary support here too. (I understand that it is a break with 
existing behaviour, but I feel we should correct this right away... we need 
to fix it some time or the other.)

 I will also go ahead and add a 'rawAppend' public interface if the existing 
functionality (just write the data to disk without heeding 'deflateValues') 
is deemed necessary.
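
 For illustration, here is a rough sketch of how the raw 'append' could 
deflate the value bytes before writing, mirroring the typed 'append'. The 
class and field names are invented for the sketch; this is not the actual 
SequenceFile.Writer code:

  import java.io.ByteArrayOutputStream;
  import java.io.DataOutputStream;
  import java.io.IOException;
  import java.util.zip.DeflaterOutputStream;

  // Sketch only -- 'out' and 'deflateValues' stand in for the writer's
  // real fields.  The key is written as-is (keys are never compressed);
  // the value bytes go through a deflater first.
  class RawAppendSketch {
    private final DataOutputStream out;
    private final boolean deflateValues;

    RawAppendSketch(DataOutputStream out, boolean deflateValues) {
      this.out = out;
      this.deflateValues = deflateValues;
    }

    public synchronized void append(byte[] data, int start, int length,
                                    int keyLength) throws IOException {
      int valLength = length - keyLength;
      byte[] valBytes = null;
      if (deflateValues) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DeflaterOutputStream deflater = new DeflaterOutputStream(buf);
        deflater.write(data, start + keyLength, valLength);
        deflater.finish();
        valBytes = buf.toByteArray();
        valLength = valBytes.length;
      }
      out.writeInt(keyLength + valLength);   // record length on disk
      out.writeInt(keyLength);               // key length
      out.write(data, start, keyLength);     // key bytes, uncompressed
      if (deflateValues) {
        out.write(valBytes, 0, valLength);   // deflated value bytes
      } else {
        out.write(data, start + keyLength, valLength);
      }
    }
  }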


2. I propose we add a public interface:
 
  void flush() throws IOException 

  to SequenceFile.Writer to let the user explicitly compress and flush any 
data sitting in the key/value buffers.

 This api will also be used internally by the existing 'close' (flush any 
remaining data in the buffers) and 'append' (flush the buffers to dfs once 
they exceed the configured size) apis... the only point of contention is 
whether this api should be made 'public'.
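
 For illustration, a minimal sketch of what I have in mind (the buffer and 
stream names are made up, and a real writer would reuse its deflater rather 
than allocate one per block):

  import java.io.ByteArrayOutputStream;
  import java.io.DataOutputStream;
  import java.io.IOException;
  import java.util.zip.DeflaterOutputStream;

  // Sketch only: flush() deflates whatever sits in the key/value
  // buffers, writes the compressed block out, and resets the buffers.
  // 'close' (and 'append', once a buffer outgrows the configured size)
  // call the same method internally.
  class BlockFlushSketch {
    private final DataOutputStream out;
    private final ByteArrayOutputStream keys = new ByteArrayOutputStream();
    private final ByteArrayOutputStream vals = new ByteArrayOutputStream();

    BlockFlushSketch(DataOutputStream out) { this.out = out; }

    public synchronized void flush() throws IOException {
      if (keys.size() == 0 && vals.size() == 0)
        return;                              // nothing buffered yet
      writeDeflated(keys.toByteArray());     // compressed key block
      writeDeflated(vals.toByteArray());     // compressed value block
      keys.reset();
      vals.reset();
    }

    public synchronized void close() throws IOException {
      flush();                               // push out the tail block
      out.close();
    }

    private void writeDeflated(byte[] raw) throws IOException {
      ByteArrayOutputStream buf = new ByteArrayOutputStream();
      DeflaterOutputStream deflater = new DeflaterOutputStream(buf);
      deflater.write(raw, 0, raw.length);
      deflater.finish();
      out.writeInt(buf.size());              // on-disk block length
      buf.writeTo(out);
    }
  }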

3. Afaik there is no way to 'configure' the default 'minimum buffer size', 
since the SequenceFile class, as it exists, does not have access to a 
'Configuration' object... 
  (... in my previous life Owen pointed out that making a 'comparator' class 
implement the 'Configurable' interface ensured that its 'configure' api 
would be called by the framework; will that trick work again?!)

  I don't want to hardcode any value for the 'minimum buffer size', nor does 
the idea of adding a new constructor with a 'Configuration' object as one of 
the params look very appealing... 
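
 Assuming the object is created reflectively by the framework, that trick 
might look like the sketch below (if I remember the 'Configurable' methods 
right; the property name is invented for illustration):

  import org.apache.hadoop.conf.Configurable;
  import org.apache.hadoop.conf.Configuration;

  // Sketch: implementing Configurable lets a framework that creates the
  // object reflectively hand it a Configuration, so no new constructor
  // is needed.  'io.seqfile.buffer.min' is a made-up property name.
  class ConfigurableWriterSketch implements Configurable {
    private Configuration conf;
    private int minBufferSize = 1024 * 1024;   // fallback default

    public void setConf(Configuration conf) {
      this.conf = conf;
      if (conf != null) {
        minBufferSize = conf.getInt("io.seqfile.buffer.min", minBufferSize);
      }
    }

    public Configuration getConf() {
      return conf;
    }
  }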

-*-*-

 Thoughts? 

> SequenceFile should compress blocks, not individual entries
> -----------------------------------------------------------
>
>                 Key: HADOOP-54
>                 URL: http://issues.apache.org/jira/browse/HADOOP-54
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: io
>    Affects Versions: 0.2.0
>            Reporter: Doug Cutting
>         Assigned To: Arun C Murthy
>             Fix For: 0.5.0
>
>         Attachments: VIntCompressionResults.txt
>
>
> SequenceFile will optionally compress individual values.  But both 
> compression and performance would be much better if sequences of keys and 
> values are compressed together.  Sync marks should only be placed between 
> blocks.  This will require some changes to MapFile too, so that all file 
> positions stored there are the positions of blocks, not entries within 
> blocks.  Probably this can be accomplished by adding a 
> getBlockStartPosition() method to SequenceFile.Writer.
