[ https://issues.apache.org/jira/browse/HADOOP-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14611353#comment-14611353 ]

Hadoop QA commented on HADOOP-11644:
------------------------------------

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 53s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to include 3 new or modified test files. |
| {color:green}+1{color} | javac |   8m 12s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc |  10m 25s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 13s | The applied patch generated 78 new checkstyle issues (total was 2, now 80). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 40s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   2m  5s | The patch appears to introduce 3 new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  22m 15s | Tests failed in hadoop-common. |
| | |  64m 45s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common |
| Failed unit tests | hadoop.io.compress.TestCodec |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12743192/HADOOP-11644.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / a78d507 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/7133/artifact/patchprocess/diffcheckstylehadoop-common.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/7133/artifact/patchprocess/whitespace.txt |
| Findbugs warnings | https://builds.apache.org/job/PreCommit-HADOOP-Build/7133/artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html |
| hadoop-common test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/7133/artifact/patchprocess/testrun_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/7133/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/7133/console |


This message was automatically generated.

> Contribute CMX compression
> --------------------------
>
>                 Key: HADOOP-11644
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11644
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: io
>            Reporter: Xabriel J Collazo Mojica
>            Assignee: Xabriel J Collazo Mojica
>         Attachments: HADOOP-11644.001.patch, HADOOP-11644.002.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Hadoop natively supports four main compression algorithms: bzip2, LZ4,
> Snappy, and zlib.
> Each of these algorithms fills a gap:
> bzip2 : Very high compression ratio, splittable
> LZ4 : Very fast, not splittable
> Snappy : Very fast, not splittable
> zlib : Good balance of compression ratio and speed
> We think there is a gap for a compression algorithm that offers fast
> compression and decompression while also being splittable. This can help
> significantly on jobs where input file sizes are >= 1 GB.
> For this, IBM has developed CMX. CMX is a dictionary-based, block-oriented,
> splittable, concatenable compression algorithm developed specifically for
> Hadoop workloads. Many of our customers use CMX, and we would love to
> contribute it to hadoop-common.
> CMX is block-oriented: we typically use 64 KB blocks, and each block is
> independently decompressible.
> CMX is splittable: we implement the SplittableCompressionCodec interface.
> All CMX files are a multiple of 64 KB, so splittability is achieved in a
> simple way with no need for external indexes (a sketch of this alignment
> follows below).
> CMX is concatenable: two independent CMX files can be concatenated
> together. We have seen that some projects, such as Apache Flume, require
> this feature.
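
A minimal sketch of the 64 KB split-alignment idea the description relies on,
under stated assumptions: CMX itself is not public, so the class name
CmxSplitAlignmentSketch, the helper alignSplit, and the constant
CMX_BLOCK_SIZE below are invented for illustration and do not come from the
HADOOP-11644 patch. A real codec would surface the adjusted offsets through
Hadoop's SplittableCompressionCodec / SplitCompressionInputStream API; the
plain-Java sketch only shows the boundary arithmetic that makes an external
index unnecessary.

{code:java}
// Hypothetical sketch only: names and constants are assumptions, not part of
// the CMX patch. It shows why a file that is always a whole number of 64 KB
// blocks can be split without an external index: any requested byte range
// can be snapped to block boundaries with pure arithmetic.
public final class CmxSplitAlignmentSketch {

  /** Assumed fixed block size; every CMX file is a multiple of this. */
  private static final long CMX_BLOCK_SIZE = 64 * 1024;

  /**
   * Rounds a requested byte range [start, end) inward to 64 KB boundaries so
   * each task decompresses only whole, independently decodable blocks.
   */
  public static long[] alignSplit(long requestedStart, long requestedEnd) {
    // First block boundary at or after the requested start.
    long alignedStart =
        ((requestedStart + CMX_BLOCK_SIZE - 1) / CMX_BLOCK_SIZE) * CMX_BLOCK_SIZE;
    // Last block boundary at or before the requested end.
    long alignedEnd = (requestedEnd / CMX_BLOCK_SIZE) * CMX_BLOCK_SIZE;
    return new long[] { alignedStart, Math.max(alignedStart, alignedEnd) };
  }

  public static void main(String[] args) {
    long[] split = alignSplit(100_000L, 500_000L);
    // Prints "131072 458752": the range snaps to whole 64 KB blocks.
    System.out.println(split[0] + " " + split[1]);
  }
}
{code}

Because every CMX file is a whole number of 64 KB blocks, the same arithmetic
would also apply to concatenated files, which is consistent with the
concatenability claim above.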


