[ https://issues.apache.org/jira/browse/HADOOP-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600513#comment-14600513 ]

Xabriel J Collazo Mojica commented on HADOOP-11644:
---------------------------------------------------

Hi folks,

I am attaching an initial patch for this work. I modeled the integration on 
Snappy's from HADOOP-7206.

Note that, unlike more typical compression codecs such as Snappy and bzip2, CMX 
was designed specifically for use in a Hadoop context, and is not available as 
a separate RPM that would install the native code. CMX is also self-contained 
in a couple of C++ classes plus the Java bindings through 
org.apache.hadoop.io.compress.CompressionCodec. Given this, the attached patch 
includes the CMX core source, which compiles directly into libhadoop.so. I 
think this is a reasonable approach given the size of CMX. This is how the 
native code is declared in the CMake file for the libhadoop.so build:
\\
\\
{code}
+if (REQUIRE_CMX)
+    # set(CMX_INCLUDE_DIR "${D}/io/compress/cmx")
+    set(CMX_SOURCE_FILES
+          "${D}/io/compress/cmx/vle.cpp"
+          "${D}/io/compress/cmx/scmx.cpp"
+          "${D}/io/compress/cmx/CmxCompressor.cpp"
+          "${D}/io/compress/cmx/CmxDecompressor.cpp"
+          "${D}/io/compress/cmx/endianness.cpp")
+else (REQUIRE_CMX)
+    # set(CMX_INCLUDE_DIR "")
+    set(CMX_SOURCE_FILES "")
+endif (REQUIRE_CMX)
{code}
\\
'CmxCompressor.cpp' and 'CmxDecompressor.cpp' are the usual JNI glue for the 
Java compressor and decompressor classes, while 'scmx.cpp' is the core of CMX. 
'endianness.cpp' is included to support big-endian architectures such as ppc.
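
To make the Java-side wiring concrete, here is a minimal usage sketch against 
the standard CompressionCodec machinery. The codec class name 
(org.apache.hadoop.io.compress.CmxCodec) and the '.cmx' file extension are 
assumptions for illustration only, not necessarily what the patch registers:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.CompressionOutputStream;

import java.io.FileOutputStream;

public class CmxUsageSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Register the codec alongside the default; the CmxCodec class name is
    // an assumption for this sketch.
    conf.set("io.compression.codecs",
        "org.apache.hadoop.io.compress.DefaultCodec,"
        + "org.apache.hadoop.io.compress.CmxCodec");
    CompressionCodecFactory factory = new CompressionCodecFactory(conf);

    // The factory resolves codecs by file extension, as it does for .gz and
    // .bz2; the ".cmx" extension is assumed. getCodec() returns null until
    // the codec class is actually on the classpath.
    CompressionCodec codec = factory.getCodec(new Path("part-00000.cmx"));

    // Write through the codec exactly as with any other CompressionCodec.
    try (CompressionOutputStream out =
             codec.createOutputStream(new FileOutputStream("part-00000.cmx"))) {
      out.write("hello, CMX".getBytes("UTF-8"));
    }
  }
}
{code}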

> Contribute CMX compression
> --------------------------
>
>                 Key: HADOOP-11644
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11644
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: io
>            Reporter: Xabriel J Collazo Mojica
>            Assignee: Xabriel J Collazo Mojica
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Hadoop natively supports four main compression algorithms: bzip2, LZ4, Snappy 
> and zlib.
> Each one of these algorithms fills a gap:
> bzip2 : very high compression ratio, splittable
> LZ4 : very fast, not splittable
> Snappy : very fast, not splittable
> zlib : good balance of compression ratio and speed
> We think there is a gap for a compression algorithm that can compress and 
> decompress fast while also being splittable. This can help significantly on 
> jobs where the input file sizes are >= 1 GB.
> For this, IBM has developed CMX. CMX is a dictionary-based, block-oriented, 
> splittable, concatenable compression algorithm developed specifically for 
> Hadoop workloads. Many of our customers use CMX, and we would love to be able 
> to contribute it to hadoop-common. 
> CMX is block-oriented : we typically use 64k blocks. Blocks are independently 
> decompressible.
> CMX is splittable : we implement the SplittableCompressionCodec interface. 
> All CMX files are a multiple of 64k, so splittability is achieved in a 
> simple way with no need for external indexes.
> CMX is concatenable : two independent CMX files can be concatenated together. 
> We have seen that some projects, like Apache Flume, require this feature.
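
To make the split arithmetic described above concrete: because every CMX file 
is a multiple of the 64k block size, a reader can snap any requested byte 
range to block boundaries with plain integer math. This is a sketch based only 
on the 64k figure quoted above, not on the CMX sources:
{code}
public class CmxSplitMath {
  // The fixed block size quoted in the issue description (64 KiB).
  static final long BLOCK_SIZE = 64 * 1024;

  /** First block boundary at or after the requested split start. */
  static long alignedStart(long splitStart) {
    return ((splitStart + BLOCK_SIZE - 1) / BLOCK_SIZE) * BLOCK_SIZE;
  }

  /** Last block boundary at or before the requested split end. */
  static long alignedEnd(long splitEnd) {
    return (splitEnd / BLOCK_SIZE) * BLOCK_SIZE;
  }

  public static void main(String[] args) {
    // A split asking for bytes [100000, 300000) would actually read the
    // compressed blocks covering [131072, 262144) -- no external index needed.
    System.out.println(alignedStart(100000)); // 131072 (= 2 * 64k)
    System.out.println(alignedEnd(300000));   // 262144 (= 4 * 64k)
  }
}
{code}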



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
