[ http://issues.apache.org/jira/browse/HADOOP-54?page=comments#action_12420611 ]
Arun C Murthy commented on HADOOP-54:
-------------------------------------
Here are some thoughts on how to go about this... inputs are much appreciated!
<thoughts>
The key idea is to compress blocks, not individual 'values' as is (optionally)
done today in a SequenceFile.
The plan is to have a configurable buffer (say a default of 1MB? or 10MB?) and
fill it up with key/value pairs. When the buffer is (almost) full we compress
the keys together into one block and the values into the following block, and
then write them out to the file along with the necessary headers and markers.
The point of compressing keys and values separately (as Eric points out) is:
a) (hopefully) better compression, since 'like' items are compressed
together.
b) if need be, iterating over the 'keys' alone is faster, since we don't need
to uncompress the 'values'.
We could also write out a 'sync' marker every time the whole compressed
key/value buffer is written out to dfs; that way the sync markers double up as
end-of-compressed-block markers. Of course the 'sync' marker is similar to the
one used today in SequenceFiles. (thoughts?)
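For illustration, here is a rough Java sketch (names and signatures are mine,
not SequenceFile's) of how a reader could re-synchronize on such a marker
after seeking to an arbitrary offset. Because the marker bytes are effectively
random, a false match inside real data is vanishingly unlikely:

  import java.io.IOException;
  import java.io.InputStream;

  // Hypothetical sketch: scan forward byte-by-byte until the sync marker is
  // seen, leaving the stream positioned at the start of the next compressed
  // block. A circular window is used so a partial match never causes a real
  // sync to be skipped.
  class SyncScanner {
    static boolean skipToNextSync(InputStream in, byte[] sync)
        throws IOException {
      final int n = sync.length;          // e.g. 16 bytes
      byte[] window = new byte[n];
      long count = 0;                     // total bytes consumed so far
      while (true) {
        int b = in.read();
        if (b < 0) return false;          // EOF: no sync marker found
        window[(int) (count % n)] = (byte) b;
        count++;
        if (count < n) continue;          // window not full yet
        boolean match = true;
        for (int i = 0; i < n; i++) {     // oldest byte lives at count % n
          if (window[(int) ((count + i) % n)] != sync[i]) {
            match = false;
            break;
          }
        }
        if (match) return true;           // stream now sits just past the sync
      }
    }
  }
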
E.g.
a) configured buffer size - 4096b
b) key/values
keyA - 32b, valueA - 1024b
keyB - 64b, valueB - 2048b
c) compressedSize(keyA+keyB) - 75b
d) compressedSize(valueA+valueB) - 2500b
On disk:
------------------------------------------------------------------------------------------------------------------------
| sync-marker | 2 | 32 | 64 | 75 | 1024 | 2048 | 2500 | compressedKeys (blob) | compressedValues (blob) | sync-marker |
------------------------------------------------------------------------------------------------------------------------
where:
  2          - no. of k/v pairs
  32, 64     - key sizes
  75         - (c) compressed size of the keys
  1024, 2048 - value sizes
  2500       - (d) compressed size of the values
Non-graphical version:
-------------------------
sync,2,32,64,75,1024,2048,2500,compressedKeys,compressedValuesBlob,sync.
Clarification: all the length fields above are stored uncompressed, as-is on
disk (written/read via the writeInt/readInt APIs).
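To make that concrete, here is a hedged sketch of a flush that serializes one
block in exactly the layout above (the Codec interface and all names are made
up; a real implementation would go through whatever compressor SequenceFile is
configured with):

  import java.io.DataOutputStream;
  import java.io.IOException;
  import java.util.List;

  // Hypothetical flush path writing one block in the layout above:
  //   sync, numPairs, key sizes, (c), value sizes, (d), key blob, value blob
  class BlockWriter {
    interface Codec { byte[] compress(byte[] raw) throws IOException; }

    static void flushBlock(DataOutputStream out, byte[] sync,
                           List<byte[]> keys, List<byte[]> values,
                           Codec codec) throws IOException {
      byte[] keyBlob = codec.compress(concat(keys));   // e.g. 75b above
      byte[] valBlob = codec.compress(concat(values)); // e.g. 2500b above
      out.write(sync);                      // doubles as end-of-block marker
      out.writeInt(keys.size());            // no. of k/v pairs, e.g. 2
      for (byte[] k : keys) out.writeInt(k.length);    // key sizes: 32, 64
      out.writeInt(keyBlob.length);         // (c)
      for (byte[] v : values) out.writeInt(v.length);  // value sizes: 1024, 2048
      out.writeInt(valBlob.length);         // (d)
      out.write(keyBlob);
      out.write(valBlob);
    }

    private static byte[] concat(List<byte[]> parts) {
      int total = 0;
      for (byte[] p : parts) total += p.length;
      byte[] all = new byte[total];
      int off = 0;
      for (byte[] p : parts) {
        System.arraycopy(p, 0, all, off, p.length);
        off += p.length;
      }
      return all;
    }
  }
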
Points to ponder:
-----------------
a) Since we need to store keys and values in separate (compressed) blocks
on disk, does it make sense to:
i) use 2 buffers (one for each) to accumulate them before compressing,
i.e. until both buffers combined hit the configured (1/10/* MB) limit? Fairly
simple to implement (see the sketch after this list)!
ii) interleave them in a single buffer and then make 2 passes over it to
compress and write to disk?
b) What is the strategy for buffer-size vis-a-vis dfs-block-size? Should we
pad out after compressing and before writing to disk? Or is it better to
ignore this for now and let dfs handle it better in future?
c) Thoughts from Eric/Doug w.r.t. custom compressors for keys/values?
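A minimal sketch of option a) i), assuming a configured byte limit and handing
the actual compression and record layout off to a flush() like the one
sketched above (again, all names are hypothetical):

  import java.io.ByteArrayOutputStream;
  import java.io.IOException;

  // Hypothetical option (i): accumulate keys and values in two separate
  // buffers and flush once their combined size crosses the configured limit.
  class TwoBufferAccumulator {
    private final ByteArrayOutputStream keyBuf = new ByteArrayOutputStream();
    private final ByteArrayOutputStream valBuf = new ByteArrayOutputStream();
    private final int limit;                 // the configured 1/10/* MB

    TwoBufferAccumulator(int limit) { this.limit = limit; }

    void append(byte[] key, byte[] value) throws IOException {
      keyBuf.write(key);
      valBuf.write(value);
      if (keyBuf.size() + valBuf.size() >= limit) {
        flush();
      }
    }

    private void flush() throws IOException {
      // compress keyBuf/valBuf separately and emit the on-disk record
      // described above, then reset both buffers for the next block
      keyBuf.reset();
      valBuf.reset();
    }
  }

A real version would also have to remember the individual key and value
lengths as entries are appended, since those are written out (uncompressed)
ahead of the blobs.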
</thoughts>
thanks,
Arun
> SequenceFile should compress blocks, not individual entries
> -----------------------------------------------------------
>
> Key: HADOOP-54
> URL: http://issues.apache.org/jira/browse/HADOOP-54
> Project: Hadoop
> Type: Improvement
> Components: io
> Versions: 0.2.0
> Reporter: Doug Cutting
> Assignee: Michel Tourn
> Fix For: 0.5.0
>
> SequenceFile will optionally compress individual values. But both
> compression and performance would be much better if sequences of keys and
> values are compressed together. Sync marks should only be placed between
> blocks. This will require some changes to MapFile too, so that all file
> positions stored there are the positions of blocks, not entries within
> blocks. Probably this can be accomplished by adding a
> getBlockStartPosition() method to SequenceFile.Writer.