Have you tried removing the libraries under ${HADOOP_HOME}/lib/native from
the path and adding only the /usr/lib64/ folder?
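
For example, in ignite.sh (a minimal sketch, not tested on your setup; note that java.library.path takes directories, not individual .so files, so adjust to your environment):

JVM_OPTS="${JVM_OPTS} -Djava.library.path=/usr/lib64/"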

2017-10-17 12:18 GMT+03:00 C Reid <[email protected]>:

> Tried, and did not work.
>
> ------------------------------
> *From:* Evgenii Zhuravlev <[email protected]>
> *Sent:* 17 October 2017 16:41
> *To:* C Reid
> *Subject:* Re: Hadoop Accelerator doesn't work when use SnappyCodec
> compression
>
> I'd recommend adding /usr/lib64/ to JAVA_LIBRARY_PATH
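>
> For example (a sketch, assuming a bash shell; adjust to your environment):
>
> export JAVA_LIBRARY_PATH=/usr/lib64/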
>
> Evgenii
>
> 2017-10-17 11:29 GMT+03:00 C Reid <[email protected]>:
>
>> Yes, IgniteNode runs on the DataNode machine.
>>
>> [[email protected] ignite]$ echo $HADOOP_HOME
>> /opt/hadoop-2.8.1-all
>> [[email protected] ignite]$ echo $IGNITE_HOME
>> /opt/apache-ignite-hadoop-2.2.0-bin
>>
>> and in ignite.sh
>> JVM_OPTS="${JVM_OPTS} -Djava.library.path=${HADOOP_HOME}/lib/native:/usr/lib64/libsnappy.so.1:${HADOOP_HOME}/lib/native/libhadoop.so"
>>
>> But the exception is still thrown, as mentioned.
>> ------------------------------
>> *From:* Evgenii Zhuravlev <[email protected]>
>> *Sent:* 17 October 2017 15:44
>>
>> *To:* [email protected]
>> *Subject:* Re: Hadoop Accelerator doesn't work when use SnappyCodec
>> compression
>>
>> Do you run Ignite on the same machine as Hadoop?
>>
>> I'd recommend checking these env variables: IGNITE_HOME, HADOOP_HOME
>> and JAVA_LIBRARY_PATH. JAVA_LIBRARY_PATH should contain the path to the
>> folder that holds the libsnappy files.
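>>
>> A quick sanity check (a sketch, assuming bash; the libsnappy location
>> may differ on your system):
>>
>> echo $IGNITE_HOME
>> echo $HADOOP_HOME
>> echo $JAVA_LIBRARY_PATH
>> ls /usr/lib64/libsnappy*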
>>
>> Evgenii
>>
>> 2017-10-17 8:45 GMT+03:00 C Reid <[email protected]>:
>>
>>> Hi Evgenii,
>>>
>>> Checked, as shown:
>>>
>>> 17/10/17 13:43:12 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
>>> 17/10/17 13:43:12 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
>>> 17/10/17 13:43:12 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
>>> 17/10/17 13:43:12 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
>>> Native library checking:
>>> hadoop:  true /opt/hadoop-2.8.1-all/lib/native/libhadoop.so
>>> zlib:    true /lib64/libz.so.1
>>> snappy:  true /usr/lib64/libsnappy.so.1
>>> lz4:     true revision:10301
>>> bzip2:   false
>>> openssl: true /usr/lib64/libcrypto.so
>>>
>>> ------------------------------
>>> *From:* Evgenii Zhuravlev <[email protected]>
>>> *Sent:* 17 October 2017 13:34
>>> *To:* [email protected]
>>> *Subject:* Re: Hadoop Accelerator doesn't work when use SnappyCodec
>>> compression
>>>
>>> Hi,
>>>
>>> Have you checked "hadoop checknative -a"? What does it show for snappy?
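>>>
>>> For example (run it as the same user that starts the Ignite node):
>>>
>>> hadoop checknative -a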
>>>
>>> Evgenii
>>>
>>> 2017-10-17 7:12 GMT+03:00 C Reid <[email protected]>:
>>>
>>>> Hi all igniters,
>>>>
>>>> I have tried many ways to include the native jar and the snappy jar,
>>>> but the exceptions below kept being thrown. (I'm sure HDFS and YARN
>>>> support snappy, since jobs run fine in the YARN framework with
>>>> SnappyCodec.) Hoping to get some help and suggestions from the community.
>>>>
>>>> [NativeCodeLoader] Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>>>>
>>>> and
>>>>
>>>> java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
>>>>         at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
>>>>         at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
>>>>         at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:136)
>>>>         at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
>>>>         at org.apache.hadoop.io.compress.CompressionCodec$Util.createOutputStreamWithCodecPool(CompressionCodec.java:131)
>>>>         at org.apache.hadoop.io.compress.SnappyCodec.createOutputStream(SnappyCodec.java:101)
>>>>         at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.getRecordWriter(TextOutputFormat.java:126)
>>>>         at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Task.prepareWriter(HadoopV2Task.java:104)
>>>>         at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2ReduceTask.run0(HadoopV2ReduceTask.java:64)
>>>>         at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Task.run(HadoopV2Task.java:55)
>>>>         at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.run(HadoopV2TaskContext.java:266)
>>>>         at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.runTask(HadoopRunnableTask.java:209)
>>>>         at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call0(HadoopRunnableTask.java:144)
>>>>         at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:116)
>>>>         at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:114)
>>>>         at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.runAsJobOwner(HadoopV2TaskContext.java:573)
>>>>         at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:114)
>>>>         at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:46)
>>>>         at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopExecutorService$2.body(HadoopExecutorService.java:186)
>>>>         at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>>>>
>>>>
>>>> Regards,
>>>>
>>>> RC.
>>>>
>>>
>>>
>>
>
