Yes, the Ignite node runs on the same machine as the DataNode.
[[email protected] ignite]$ echo $HADOOP_HOME
/opt/hadoop-2.8.1-all
[[email protected] ignite]$ echo $IGNITE_HOME
/opt/apache-ignite-hadoop-2.2.0-bin
and in ignite.sh:
JVM_OPTS="${JVM_OPTS}
-Djava.library.path=${HADOOP_HOME}/lib/native:/usr/lib64/libsnappy.so.1:${HADOOP_HOME}/lib/native/libhadoop.so"
But the exception is still thrown as mentioned.
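A side note on the setting above: `-Djava.library.path` expects a colon-separated list of directories, while the value shown also includes the `.so` files themselves (`/usr/lib64/libsnappy.so.1` and `.../libhadoop.so`), which the JVM treats as non-existent directories and skips. A minimal corrected sketch, with directory locations assumed from this thread:

```shell
# java.library.path entries must be directories that contain the native
# libraries, not paths to the .so files themselves.
JVM_OPTS="${JVM_OPTS} -Djava.library.path=${HADOOP_HOME}/lib/native:/usr/lib64"
```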
________________________________
From: Evgenii Zhuravlev <[email protected]>
Sent: 17 October 2017 15:44
To: [email protected]
Subject: Re: Hadoop Accelerator doesn't work when use SnappyCodec compression
Do you run Ignite on the same machine as hadoop?
I'd recommend checking these environment variables:
IGNITE_HOME, HADOOP_HOME and JAVA_LIBRARY_PATH. JAVA_LIBRARY_PATH should
contain the path to the directory containing the libsnappy files.
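To make that concrete, here is a small shell check; `check_dirs` is a hypothetical helper, and the directory list is only an assumption based on the paths quoted earlier in this thread. Every entry on JAVA_LIBRARY_PATH / java.library.path must be an existing directory, not an `.so` file:

```shell
# Flag any java.library.path / JAVA_LIBRARY_PATH entry that is not an
# existing directory (the JVM silently skips such entries).
check_dirs() {
  for d in $(printf '%s' "$1" | tr ':' ' '); do
    if [ -d "$d" ]; then echo "ok: $d"; else echo "missing: $d"; fi
  done
}

# Directories assumed from this thread; /usr/lib64 holds libsnappy.so.1 here.
check_dirs "/opt/hadoop-2.8.1-all/lib/native:/usr/lib64"
```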
Evgenii
2017-10-17 8:45 GMT+03:00 C Reid <[email protected]>:
Hi Evgenii,
Checked, as shown:
17/10/17 13:43:12 DEBUG util.NativeCodeLoader: Trying to load the custom-built
native-hadoop library...
17/10/17 13:43:12 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
17/10/17 13:43:12 WARN bzip2.Bzip2Factory: Failed to load/initialize
native-bzip2 library system-native, will use pure-Java version
17/10/17 13:43:12 INFO zlib.ZlibFactory: Successfully loaded & initialized
native-zlib library
Native library checking:
hadoop: true /opt/hadoop-2.8.1-all/lib/native/libhadoop.so
zlib: true /lib64/libz.so.1
snappy: true /usr/lib64/libsnappy.so.1
lz4: true revision:10301
bzip2: false
openssl: true /usr/lib64/libcrypto.so
________________________________
From: Evgenii Zhuravlev <[email protected]>
Sent: 17 October 2017 13:34
To: [email protected]
Subject: Re: Hadoop Accelerator doesn't work when use SnappyCodec compression
Hi,
Have you checked "hadoop checknative -a"? What does it show for snappy?
Evgenii
2017-10-17 7:12 GMT+03:00 C Reid <[email protected]>:
Hi all igniters,
I have tried many ways to include the native jar and the snappy jar, but the
exceptions below keep being thrown. (I'm sure HDFS and YARN support snappy,
since jobs run under the YARN framework with SnappyCodec.) Hoping to get some
help and suggestions from the community.
[NativeCodeLoader] Unable to load native-hadoop library for your platform...
using builtin-java classes where applicable
and
java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
        at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
        at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
        at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:136)
        at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
        at org.apache.hadoop.io.compress.CompressionCodec$Util.createOutputStreamWithCodecPool(CompressionCodec.java:131)
        at org.apache.hadoop.io.compress.SnappyCodec.createOutputStream(SnappyCodec.java:101)
        at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.getRecordWriter(TextOutputFormat.java:126)
        at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Task.prepareWriter(HadoopV2Task.java:104)
        at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2ReduceTask.run0(HadoopV2ReduceTask.java:64)
        at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Task.run(HadoopV2Task.java:55)
        at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.run(HadoopV2TaskContext.java:266)
        at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.runTask(HadoopRunnableTask.java:209)
        at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call0(HadoopRunnableTask.java:144)
        at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:116)
        at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:114)
        at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.runAsJobOwner(HadoopV2TaskContext.java:573)
        at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:114)
        at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:46)
        at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopExecutorService$2.body(HadoopExecutorService.java:186)
        at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
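One further check that may help narrow this down: the JNI symbol behind `buildSupportsSnappy()Z` can be looked up directly in libhadoop.so. If it is absent, the native library was built without snappy support and no Ignite setting will fix that. This is only a sketch; `find_symbol` is a hypothetical helper and the path is assumed from this thread:

```shell
# Search the dynamic symbol table of a shared library for a symbol;
# no output means the symbol (here, snappy support) is not present.
find_symbol() {
  nm -D "$1" 2>/dev/null | grep -i "$2"
}

find_symbol /opt/hadoop-2.8.1-all/lib/native/libhadoop.so buildSupportsSnappy \
  || echo "symbol not found (or file missing)"
```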
Regards,
RC.