You can also try setting the environment variable LD_LIBRARY_PATH to point
to where your compiled libraries are.
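
For example, a minimal sketch (the path is a placeholder for wherever the
.so actually lives on the worker nodes):

    spark-submit \
      --conf spark.executorEnv.LD_LIBRARY_PATH=/opt/native/lib \
      ...

Keep in mind that in cluster mode the driver itself runs inside the
cluster, so exporting LD_LIBRARY_PATH only on the machine you submit from
won't reach it.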


Renato M.

2015-10-14 21:07 GMT+02:00 Bernardo Vecchia Stein <bernardovst...@gmail.com>:

> Hi Deenar,
>
> Yes, the native library is installed on all machines of the cluster. I
> tried a simpler approach, just calling System.load() with the exact path
> of the library, but it still fails (I get exactly the same error
> message).
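>
> In case it helps, what I tried looks roughly like this (the path and the
> argument values are placeholders for the real ones):
>
>     // load by absolute path instead of relying on the library search path
>     System.load("/path/to/libname.so")
>     // still throws UnsatisfiedLinkError in cluster mode
>     val result = new ClassName().nativeMethod(Array[Byte](1), Array[Byte](2))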
>
> Any ideas of what might be failing?
>
> Thank you,
> Bernardo
>
> On 14 October 2015 at 02:50, Deenar Toraskar <deenar.toras...@gmail.com>
> wrote:
>
>> Hi Bernardo
>>
>> Is the native library installed on all machines of your cluster and are
>> you setting both the spark.driver.extraLibraryPath and
>> spark.executor.extraLibraryPath ?
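>>
>> For example (paths are placeholders):
>>
>>     spark-submit \
>>       --conf spark.driver.extraLibraryPath=/opt/native/lib \
>>       --conf spark.executor.extraLibraryPath=/opt/native/lib \
>>       ...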
>>
>> Deenar
>>
>>
>>
>> On 14 October 2015 at 05:44, Bernardo Vecchia Stein <bernardovst...@gmail.com> wrote:
>>
>>> Hello,
>>>
>>> I am trying to run some Scala code in cluster mode using spark-submit.
>>> This code uses addLibrary to link against a .so that exists on the
>>> machine, and that library implements a function that is called natively
>>> (the code contains the corresponding native declaration).
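>>>
>>> Concretely, the declaration side looks roughly like this (names are
>>> placeholders matching the error below):
>>>
>>>     package org.name.othername
>>>
>>>     class ClassName {
>>>       // implemented in the .so, resolved through JNI when first called
>>>       @native def nativeMethod(a: Array[Byte], b: Array[Byte]): Array[Byte]
>>>     }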
>>>
>>> The problem I'm facing is: whenever I try to run this code in cluster
>>> mode, Spark fails with the following message when trying to execute the
>>> native function:
>>> java.lang.UnsatisfiedLinkError:
>>> org.name.othername.ClassName.nativeMethod([B[B)[B
>>>
>>> Apparently, the library is being found by Spark, but the required
>>> function within it isn't.
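>>>
>>> As far as I understand, the descriptor ([B[B)[B just means the method
>>> takes two byte arrays and returns one, and at call time the JVM looks
>>> for an exported symbol named Java_org_name_othername_ClassName_nativeMethod
>>> in the libraries loaded by that classloader (nm -D on the .so should
>>> show it).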
>>>
>>> When I run in client mode, however, nothing fails and everything works
>>> as expected.
>>>
>>> Does anybody have any idea of what might be the problem here? Is there
>>> any bug that could be related to this when running in cluster mode?
>>>
>>> I appreciate any help.
>>> Thanks,
>>> Bernardo
>>>
>>
>>
>
