Hi Evgenii,

Checked, as shown:
17/10/17 13:43:12 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
17/10/17 13:43:12 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
17/10/17 13:43:12 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
17/10/17 13:43:12 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /opt/hadoop-2.8.1-all/lib/native/libhadoop.so
zlib:    true /lib64/libz.so.1
snappy:  true /usr/lib64/libsnappy.so.1
lz4:     true revision:10301
bzip2:   false
openssl: true /usr/lib64/libcrypto.so

________________________________
From: Evgenii Zhuravlev <[email protected]>
Sent: 17 October 2017 13:34
To: [email protected]
Subject: Re: Hadoop Accelerator doesn't work when use SnappyCodec compression

Hi,

Have you checked "hadoop checknative -a"? What does it show for snappy?

Evgenii

2017-10-17 7:12 GMT+03:00 C Reid <[email protected]>:

Hi all igniters,

I have tried many ways to include the native jar and the snappy jar, but the exceptions below kept being thrown. (I'm sure HDFS and YARN support Snappy, because jobs run fine under the YARN framework with SnappyCodec.) Hoping to get some help and suggestions from the community.

[NativeCodeLoader] Unable to load native-hadoop library for your platform...
using builtin-java classes where applicable

java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
    at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
    at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
    at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:136)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
    at org.apache.hadoop.io.compress.CompressionCodec$Util.createOutputStreamWithCodecPool(CompressionCodec.java:131)
    at org.apache.hadoop.io.compress.SnappyCodec.createOutputStream(SnappyCodec.java:101)
    at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.getRecordWriter(TextOutputFormat.java:126)
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Task.prepareWriter(HadoopV2Task.java:104)
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2ReduceTask.run0(HadoopV2ReduceTask.java:64)
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Task.run(HadoopV2Task.java:55)
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.run(HadoopV2TaskContext.java:266)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.runTask(HadoopRunnableTask.java:209)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call0(HadoopRunnableTask.java:144)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:116)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:114)
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.runAsJobOwner(HadoopV2TaskContext.java:573)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:114)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:46)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopExecutorService$2.body(HadoopExecutorService.java:186)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)

Regards,
RC.
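For reference, the symptom above (the `hadoop` CLI reports `snappy: true`, yet the Ignite task throws `UnsatisfiedLinkError: buildSupportsSnappy()Z`) typically means the Ignite node's own JVM was started without the Hadoop native library directory on its library path. A minimal sketch of how one might expose it, assuming the native directory from the checknative output above and that the node is launched via Ignite's `ignite.sh` script (the `JVM_OPTS` and `IGNITE_HOME` variables are assumptions about the local setup):

```shell
# Sketch: make the Hadoop native libraries (libhadoop.so, libsnappy.so)
# visible to the Ignite node's JVM before starting it.
# Path taken from the "hadoop checknative -a" output above.
HADOOP_NATIVE_DIR=/opt/hadoop-2.8.1-all/lib/native

# Option 1: extend the dynamic-linker search path for the node process.
export LD_LIBRARY_PATH="$HADOOP_NATIVE_DIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# Option 2: pass java.library.path explicitly as a JVM option
# (assumes the launcher script honors JVM_OPTS).
export JVM_OPTS="$JVM_OPTS -Djava.library.path=$HADOOP_NATIVE_DIR"

# Start the node with the adjusted environment (IGNITE_HOME is an assumption).
"$IGNITE_HOME/bin/ignite.sh"
```

Either option alone is usually sufficient; the key point is that `hadoop checknative` only proves the Hadoop CLI's JVM can load the natives, not that the Ignite process can.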
