Yeah, I tried all the methods I found on Google, and the results were the same.

Also, because it's just an "export LD_LIBRARY_PATH=..." line in 
'ignite.sh', I'm not sure whether it actually takes effect on grid startup.
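For what it's worth, one way to verify that the export actually takes effect is to print the variable from the same script, right before the JVM is launched. A minimal sketch, assuming the HADOOP_HOME path quoted earlier in this thread (adjust for your system):

```shell
# Sketch: export the native-library path before the JVM starts,
# prepending to any value that is already set.
HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop-2.8.1-all}
export LD_LIBRARY_PATH="${HADOOP_HOME}/lib/native:/usr/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
# Sanity check: this is the value the Ignite JVM will inherit.
echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH}"
```

If the echoed value does not show up in the node's startup log environment, the export is happening in a different shell than the one that launches the grid.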

We are planning to run more than 1000 grids in a cluster, and the production 
environment has plenty of .snappy files, so I'm struggling now...
Btw, my Hadoop version is 2.6.0; does it matter?

Thanks for your patience.
________________________________
From: Evgenii Zhuravlev <[email protected]>
Sent: 19 October 2017 17:07
To: [email protected]
Subject: Re: Hadoop Accelerator doesn't work when use SnappyCodec compression

Could you also try to set LD_LIBRARY_PATH variable with path to the folder with 
native libraries?

2017-10-17 17:56 GMT+03:00 C Reid <[email protected]>:
I just tried, got the same:
"Unable to load native-hadoop library for your platform... using builtin-java 
classes where applicable"
"java.lang.UnsatisfiedLinkError: 
org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z"

I also tried adding all the related native libraries under one of the JDK 
folders where all the *.so files are located, but Ignite just couldn't load 
them, which is strange.
________________________________
From: Evgenii Zhuravlev <[email protected]>
Sent: 17 October 2017 21:25

To: [email protected]
Subject: Re: Hadoop Accelerator doesn't work when use SnappyCodec compression

Have you tried removing the ${HADOOP_HOME}/lib/native libraries from the path 
and adding only the /usr/lib64/ folder?

2017-10-17 12:18 GMT+03:00 C Reid <[email protected]>:
Tried it, and it did not work.

________________________________
From: Evgenii Zhuravlev <[email protected]>
Sent: 17 October 2017 16:41
To: C Reid
Subject: Re: Hadoop Accelerator doesn't work when use SnappyCodec compression

I'd recommend adding /usr/lib64/ to JAVA_LIBRARY_PATH

Evgenii

2017-10-17 11:29 GMT+03:00 C Reid <[email protected]>:
Yes, IgniteNode runs on the DataNode machine.

[[email protected] ignite]$ echo $HADOOP_HOME
/opt/hadoop-2.8.1-all
[[email protected] ignite]$ echo $IGNITE_HOME
/opt/apache-ignite-hadoop-2.2.0-bin

and in ignite.sh
JVM_OPTS="${JVM_OPTS} 
-Djava.library.path=${HADOOP_HOME}/lib/native:/usr/lib64/libsnappy.so.1:${HADOOP_HOME}/lib/native/libhadoop.so"

But the exception is thrown as mentioned.
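One thing worth noting about the JVM_OPTS line above: java.library.path takes a list of directories, not individual .so files, so the two file entries are most likely ignored by the JVM. A hedged sketch of how that line might look with directories only (paths taken from this thread; adjust as needed):

```shell
# Sketch: java.library.path entries should be directories containing the
# native libraries (libhadoop.so, libsnappy.so.1), not the .so files themselves.
HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop-2.8.1-all}
JVM_OPTS="${JVM_OPTS} -Djava.library.path=${HADOOP_HOME}/lib/native:/usr/lib64"
echo "$JVM_OPTS"
```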
________________________________
From: Evgenii Zhuravlev <[email protected]>
Sent: 17 October 2017 15:44

To: [email protected]
Subject: Re: Hadoop Accelerator doesn't work when use SnappyCodec compression

Do you run Ignite on the same machine as Hadoop?

I'd recommend checking these env variables:
IGNITE_HOME, HADOOP_HOME, and JAVA_LIBRARY_PATH. JAVA_LIBRARY_PATH should 
contain the path to the folder with the libsnappy files.
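A minimal sketch of that last suggestion, pointing JAVA_LIBRARY_PATH at the directories that hold libsnappy and libhadoop (the /usr/lib64 location is the one reported in this thread; adjust for your system):

```shell
# Sketch: JAVA_LIBRARY_PATH should list the directories containing the
# native libraries, e.g. /usr/lib64 (libsnappy.so.1) and the Hadoop
# native folder (libhadoop.so). Paths here are assumptions from the thread.
export JAVA_LIBRARY_PATH="/usr/lib64:${HADOOP_HOME:-/opt/hadoop-2.8.1-all}/lib/native"
echo "JAVA_LIBRARY_PATH=${JAVA_LIBRARY_PATH}"
```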

Evgenii

2017-10-17 8:45 GMT+03:00 C Reid <[email protected]>:
Hi Evgenii,

Checked, as shown:

17/10/17 13:43:12 DEBUG util.NativeCodeLoader: Trying to load the custom-built 
native-hadoop library...
17/10/17 13:43:12 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
17/10/17 13:43:12 WARN bzip2.Bzip2Factory: Failed to load/initialize 
native-bzip2 library system-native, will use pure-Java version
17/10/17 13:43:12 INFO zlib.ZlibFactory: Successfully loaded & initialized 
native-zlib library
Native library checking:
hadoop:  true /opt/hadoop-2.8.1-all/lib/native/libhadoop.so
zlib:    true /lib64/libz.so.1
snappy:  true /usr/lib64/libsnappy.so.1
lz4:     true revision:10301
bzip2:   false
openssl: true /usr/lib64/libcrypto.so

________________________________
From: Evgenii Zhuravlev <[email protected]>
Sent: 17 October 2017 13:34
To: [email protected]
Subject: Re: Hadoop Accelerator doesn't work when use SnappyCodec compression

Hi,

Have you checked "hadoop checknative -a"? What does it show for snappy?

Evgenii

2017-10-17 7:12 GMT+03:00 C Reid <[email protected]>:
Hi all igniters,

I have tried many ways to include the native jar and the snappy jar, but the 
exceptions below keep being thrown. (I'm sure HDFS and YARN support Snappy, 
since jobs run fine in the YARN framework with SnappyCodec.) I hope to get 
some help and suggestions from the community.

[NativeCodeLoader] Unable to load native-hadoop library for your platform... 
using builtin-java classes where applicable

and

java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
        at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
        at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
        at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:136)
        at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
        at org.apache.hadoop.io.compress.CompressionCodec$Util.createOutputStreamWithCodecPool(CompressionCodec.java:131)
        at org.apache.hadoop.io.compress.SnappyCodec.createOutputStream(SnappyCodec.java:101)
        at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.getRecordWriter(TextOutputFormat.java:126)
        at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Task.prepareWriter(HadoopV2Task.java:104)
        at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2ReduceTask.run0(HadoopV2ReduceTask.java:64)
        at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Task.run(HadoopV2Task.java:55)
        at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.run(HadoopV2TaskContext.java:266)
        at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.runTask(HadoopRunnableTask.java:209)
        at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call0(HadoopRunnableTask.java:144)
        at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:116)
        at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:114)
        at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.runAsJobOwner(HadoopV2TaskContext.java:573)
        at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:114)
        at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:46)
        at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopExecutorService$2.body(HadoopExecutorService.java:186)
        at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)


Regards,

RC.




