There is a hack for this issue: copying my native library into
$HADOOP_HOME/lib/native makes it discoverable and the program runs. However,
this is not an appropriate solution and it seems fragile.
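
For completeness, the workaround amounts to something like this (libmylib.so
is a hypothetical file name standing in for my actual library):
`
# fragile workaround: drop the library where Hadoop already looks
cp <native_lib_dir>/libmylib.so $HADOOP_HOME/lib/native/
`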

I tried to find where the 'lib/native' path appears in the configuration and
found two places:
hadoop-env.sh: export JAVA_LIBRARY_PATH="$JAVA_LIBRARY_PATH:/usr/lib/hadoop-lzo/lib/native"
mapred-site.xml: the mapreduce.admin.user.env key

I tried to add the path to the directory containing my native lib in both
places, but still no luck.
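
For reference, the hadoop-env.sh change looked roughly like this
(<native_lib_dir> stands in for the directory holding my library):
`
# hadoop-env.sh: extend the JVM library path Hadoop sets up
export JAVA_LIBRARY_PATH="$JAVA_LIBRARY_PATH:/usr/lib/hadoop-lzo/lib/native:<native_lib_dir>"
`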

Thanks,
Timur

On Wed, Apr 6, 2016 at 11:21 PM, Timur Fayruzov <timur.fairu...@gmail.com>
wrote:

> Hello,
>
> I'm not sure whether this is a Hadoop or a Flink-specific question, but
> since I ran into it in the context of Flink, I'm asking here. I'd be glad
> if anyone can suggest a more appropriate place.
>
> I have a native library that I need to use in my Flink batch job running on
> EMR, and I'm trying to point the JVM to the location of that library.
> Normally I'd do this with the java.library.path parameter, so I tried
> running as follows:
> `
> HADOOP_CONF_DIR=/etc/hadoop/conf JVM_ARGS=-Djava.library.path=<native_lib_dir> \
>   flink-1.0.0/bin/flink run -m yarn-cluster -yn 1 -yjm 768 -ytm 768 <my.jar>
> `
> It does not work; it fails with `java.lang.UnsatisfiedLinkError` when trying
> to load the native lib. It probably has to do with YARN not passing this
> parameter to the task nodes, but my understanding of this mechanism is quite
> limited so far.
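>
> To make the failure concrete, this is essentially what happens inside the
> job (a minimal sketch; "mylib" is a hypothetical name standing in for my
> actual library):
> `
> public class LibPathCheck {
>     public static void main(String[] args) {
>         // System.loadLibrary searches only java.library.path, which is
>         // why passing this parameter through to the task JVMs matters.
>         System.out.println(System.getProperty("java.library.path"));
>         System.loadLibrary("mylib"); // UnsatisfiedLinkError if not found
>     }
> }
> `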
>
> I dug up this Jira ticket:
> https://issues.apache.org/jira/browse/MAPREDUCE-3693, but setting
> LD_LIBRARY_PATH in mapreduce.admin.user.env did not solve the problem
> either.
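>
> For reference, the setting I tried followed the pattern from that ticket
> (a sketch; <native_lib_dir> again stands in for my library directory):
> `
> <!-- mapred-site.xml: environment passed to the task containers -->
> <property>
>   <name>mapreduce.admin.user.env</name>
>   <value>LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native:<native_lib_dir></value>
> </property>
> `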
>
> Any help or hints on where to look are highly appreciated.
>
> Thanks,
> Timur
>
