[ 
https://issues.apache.org/jira/browse/HADOOP-4012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12711194#action_12711194
 ] 

Abdul Qadeer commented on HADOOP-4012:
--------------------------------------

Looks like there is some problem with Hudson, at least as far as the core and 
contrib tests are concerned.  If you look at the Hudson queue 
(http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/) you 
will see that many JIRAs are ending up with the same 4 or 5 test case failures, 
e.g.:

http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/362/#showFailuresLink


http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/361/#showFailuresLink


http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/360/#showFailuresLink

http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/358/#showFailuresLink

Similarly, in some cases Hudson complains about test failures but does not list 
the failed tests (as in this JIRA's case, and there are many others).

I will test my patch on a local box and will post the results here.

> Providing splitting support for bzip2 compressed files
> ------------------------------------------------------
>
>                 Key: HADOOP-4012
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4012
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: io
>    Affects Versions: 0.19.2
>            Reporter: Abdul Qadeer
>            Assignee: Abdul Qadeer
>             Fix For: 0.19.2
>
>         Attachments: Hadoop-4012-version1.patch, Hadoop-4012-version2.patch, 
> Hadoop-4012-version3.patch, Hadoop-4012-version4.patch, 
> Hadoop-4012-version5.patch, Hadoop-4012-version6.patch, 
> Hadoop-4012-version7.patch
>
>
> Hadoop assumes that if the input data is compressed, it cannot be split 
> (mainly due to the limitation of many codecs that they need the whole input 
> stream to decompress successfully).  So in such a case, Hadoop prepares only 
> one split per compressed file, where the lower split limit is at 0 and the 
> upper limit is the end of the file.  The consequence of this decision is 
> that one compressed file goes to a single mapper.  Although this circumvents 
> the codec limitation mentioned above, it substantially reduces the 
> parallelism that would otherwise be possible with splitting.
> BZip2 is a compression/decompression algorithm that compresses data in 
> blocks, and these compressed blocks can later be decompressed independently 
> of each other.  This presents an opportunity: instead of one BZip2-compressed 
> file going to one mapper, we can process chunks of the file in parallel.  The 
> correctness criterion for such processing is that, for a bzip2-compressed 
> file, each compressed block should be processed by exactly one mapper and, 
> ultimately, all the blocks of the file should be processed.  (By processing 
> we mean the actual utilization of the uncompressed data (coming out of the 
> codec) in a mapper.)
> We are writing the code to implement this suggested functionality.  Although 
> we have used bzip2 as an example, we have tried to extend Hadoop's 
> compression interfaces so that any other codec with the same capability as 
> bzip2 could easily use the splitting support.  The details of these changes 
> will be posted when we submit the code.
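
The splitting behavior described above can be illustrated with a minimal 
sketch (in Python, with hypothetical names; Hadoop's real InputFormat API 
produces FileSplit objects in Java, and the split size comes from the job 
configuration rather than a fixed argument):

```python
# Hypothetical sketch: split generation for an input file.  A non-splittable
# codec forces a single split covering the whole file; a splittable codec
# such as bzip2 allows several byte-range splits, each handled by its own
# mapper, whose record reader then snaps the range to compressed-block
# boundaries.

def compute_splits(file_len, split_size):
    """Return (start, end) byte ranges that together cover the whole file."""
    splits = []
    start = 0
    while start < file_len:
        end = min(start + split_size, file_len)
        splits.append((start, end))
        start = end
    return splits

# Non-splittable case: one split, one mapper.
print(compute_splits(100, 100))   # [(0, 100)]

# Splittable case: four splits processed in parallel.
print(compute_splits(100, 30))    # [(0, 30), (30, 60), (60, 90), (90, 100)]
```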

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
