[ https://issues.apache.org/jira/browse/HADOOP-7076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Niels Basjes updated HADOOP-7076:
---------------------------------

    Attachment: HADOOP-7076.patch

This patch now passes all unit tests.

Changes present in this file:
- Added a getBytesRead() method to the
org.apache.hadoop.io.compress.Decompressor interface so that the position in
the underlying file can be queried.
- Added an option to decrease the blocksize used by the DecompressorStream to
read the disk file and feed the decompressor (needed to achieve the required
accuracy).
- Added SplittableGzipCodec, which allows splitting gzipped input files.

- Added TestSplittableCodecSeams, which verifies that all the splits are
seamless: no duplicate records and no missing records.
- Fixed several bugs in TestCodec.java:
   - Reset of the decompressor.
   - Writing a number in binary form into a file that is later read and
parsed as a text file (now all textual).
   - Naming: no more "Splitable" in the touched unit test files.
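To illustrate the first change, here is a minimal sketch of how a
getBytesRead() accessor on a decompressor might look. The interface and class
names below are hypothetical stand-ins (the real change extends
org.apache.hadoop.io.compress.Decompressor); the counting implementation is
purely illustrative, not the patch itself:

```java
// Hypothetical sketch of a Decompressor-like interface extended with
// getBytesRead(); the counting implementation is for illustration only.
interface PositionAwareDecompressor {
    int decompress(byte[] b, int off, int len);
    // New accessor: total compressed bytes consumed so far, which lets the
    // caller locate the read position in the underlying file.
    long getBytesRead();
}

class CountingDecompressor implements PositionAwareDecompressor {
    private long bytesRead = 0;

    public int decompress(byte[] b, int off, int len) {
        bytesRead += len;   // pretend 'len' input bytes were consumed
        return len;         // and the same number of output bytes produced
    }

    public long getBytesRead() {
        return bytesRead;
    }
}

public class BytesReadDemo {
    public static void main(String[] args) {
        CountingDecompressor d = new CountingDecompressor();
        d.decompress(new byte[64], 0, 64);
        d.decompress(new byte[64], 0, 32);
        System.out.println(d.getBytesRead()); // prints 96
    }
}
```

With such an accessor, an input stream wrapping the decompressor can report how
far into the compressed file it has read, which is what makes split-boundary
accounting possible.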


> Splittable Gzip
> ---------------
>
>                 Key: HADOOP-7076
>                 URL: https://issues.apache.org/jira/browse/HADOOP-7076
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: io
>            Reporter: Niels Basjes
>         Attachments: HADOOP-7076.patch
>
>
> Files compressed with the gzip codec are not splittable due to the nature of
> the codec.
> This limits the options for scaling out when reading large gzipped input
> files.
> Given that gunzipping a 1GiB file usually takes only about 2 minutes, I
> figured that for some use cases wasting some resources may result in a
> shorter overall job time.
> So reading the entire input file from the start for each split (wasting
> resources!!) may lead to additional scalability.
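The idea described above, every split decompressing the stream from the
beginning and keeping only the records that fall inside its own byte range, can
be sketched as follows. This is a simplified illustration with hypothetical
names, not the codec's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class SplitFromStartDemo {
    // Hypothetical sketch: each split scans all records from the start of the
    // (conceptually decompressed) stream, wasting CPU, but only emits records
    // whose uncompressed offset falls inside [splitStart, splitEnd).
    static List<String> readSplit(String[] records, long splitStart, long splitEnd) {
        List<String> out = new ArrayList<>();
        long pos = 0;                  // uncompressed position
        for (String rec : records) {   // always scan from record 0
            if (pos >= splitStart && pos < splitEnd) {
                out.add(rec);
            }
            pos += rec.length() + 1;   // +1 for the record separator
        }
        return out;
    }

    public static void main(String[] args) {
        String[] records = {"aaaa", "bbbb", "cccc", "dddd"};
        // Two splits covering the 20 uncompressed bytes: no record is
        // duplicated and none is missed, which is exactly the property the
        // seam test described above checks.
        System.out.println(readSplit(records, 0, 10));   // prints [aaaa, bbbb]
        System.out.println(readSplit(records, 10, 20));  // prints [cccc, dddd]
    }
}
```

The trade-off is that total decompression work grows with the number of splits,
but each split's records can be processed in parallel, which is where the
scalability gain comes from.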

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
