Hi Bernardo,

So is this in distributed mode or on a single node? Maybe fix the issue on
a single node first ;)
You are right that Spark is finding the library, so the problem is not
locating the *.so file. I also use System.load(<LIBRARY_NAME>) with
LD_LIBRARY_PATH set, and I am able to execute without issues. Maybe you'd
like to double-check your paths, environment variables, and the parameters
spark.driver.extraLibraryPath and spark.executor.extraLibraryPath.
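
In case it helps, this is the loading pattern that works for me (a minimal
sketch; "mylib", NativeLoader, and the MY_LIB_DIR variable are placeholders
for your own names):

  import java.io.File

  object NativeLoader {
    // System.load needs an absolute path to the .so; System.loadLibrary
    // searches java.library.path, which gets populated from
    // LD_LIBRARY_PATH / the spark.*.extraLibraryPath settings.
    lazy val ensureLoaded: Unit = sys.env.get("MY_LIB_DIR") match {
      case Some(dir) => System.load(new File(dir, "libmylib.so").getAbsolutePath)
      case None      => System.loadLibrary("mylib")
    }
  }

One more thing to rule out: the load has to happen inside every executor
JVM that calls the native method, not only on the driver.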


Best,

Renato M.

2015-10-14 21:40 GMT+02:00 Bernardo Vecchia Stein <bernardovst...@gmail.com>:

> Hi Renato,
>
> I have done that as well, but so far no luck. I believe Spark is finding
> the library correctly; otherwise the error message would be something like
> "no <libraryname> in java.library.path". The problem seems to be something
> else, and I'm not sure how to find it.
>
> Thanks,
> Bernardo
>
> On 14 October 2015 at 16:28, Renato Marroquín Mogrovejo <renatoj.marroq...@gmail.com> wrote:
>
>> You can also try setting the env variable LD_LIBRARY_PATH to point where
>> your compiled libraries are.
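>>
>> Something like this on every node, before the executors start (the path
>> is just a placeholder for wherever your compiled .so lives):
>>
>>   export LD_LIBRARY_PATH=/opt/native/lib:$LD_LIBRARY_PATH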
>>
>>
>> Renato M.
>>
>> 2015-10-14 21:07 GMT+02:00 Bernardo Vecchia Stein <bernardovst...@gmail.com>:
>>
>>> Hi Deenar,
>>>
>>> Yes, the native library is installed on all machines of the cluster. I
>>> tried a simpler approach, just calling System.load() with the exact path
>>> of the library, and things still don't work (I get exactly the same error
>>> message).
>>>
>>> Any ideas of what might be failing?
>>>
>>> Thank you,
>>> Bernardo
>>>
>>> On 14 October 2015 at 02:50, Deenar Toraskar <deenar.toras...@gmail.com> wrote:
>>>
>>>> Hi Bernardo
>>>>
>>>> Is the native library installed on all machines of your cluster, and are
>>>> you setting both spark.driver.extraLibraryPath and
>>>> spark.executor.extraLibraryPath?
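>>>>
>>>> For example, on the spark-submit command line (the paths below are just
>>>> placeholders for wherever your .so lives):
>>>>
>>>>   spark-submit \
>>>>     --conf spark.driver.extraLibraryPath=/opt/native/lib \
>>>>     --conf spark.executor.extraLibraryPath=/opt/native/lib \
>>>>     ...
>>>>
>>>> These settings affect how the JVMs are launched, so the command line or
>>>> spark-defaults.conf is the safe place to set them, rather than in code.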
>>>>
>>>> Deenar
>>>>
>>>>
>>>>
>>>> On 14 October 2015 at 05:44, Bernardo Vecchia Stein <bernardovst...@gmail.com> wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> I am trying to run some Scala code in cluster mode using spark-submit.
>>>>> This code uses addLibrary to link with a .so that exists on the machine,
>>>>> and the library exposes a function that is called natively (there is a
>>>>> corresponding native method declaration in the code).
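>>>>>
>>>>> For context, the pattern is roughly the following (simplified, with
>>>>> placeholder names; "mylib" stands for the real library):
>>>>>
>>>>>   package org.name.othername
>>>>>
>>>>>   class ClassName {
>>>>>     ClassName.ensureLoaded
>>>>>     // Implemented in libmylib.so: two byte-array args, byte-array result.
>>>>>     @native def nativeMethod(a: Array[Byte], b: Array[Byte]): Array[Byte]
>>>>>   }
>>>>>
>>>>>   object ClassName {
>>>>>     // Referencing this lazy val loads the .so once per JVM.
>>>>>     lazy val ensureLoaded: Unit = System.loadLibrary("mylib")
>>>>>   }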
>>>>>
>>>>> The problem I'm facing is: whenever I try to run this code in cluster
>>>>> mode, Spark fails with the following message when trying to execute the
>>>>> native function:
>>>>> java.lang.UnsatisfiedLinkError:
>>>>> org.name.othername.ClassName.nativeMethod([B[B)[B
>>>>>
>>>>> Apparently, the library is being found by Spark, but the required
>>>>> function isn't.
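>>>>>
>>>>> For what it's worth, the signature in the error is just the standard
>>>>> JNI encoding of
>>>>>
>>>>>   def nativeMethod(a: Array[Byte], b: Array[Byte]): Array[Byte]
>>>>>
>>>>> so at call time the JVM looks for a symbol named
>>>>> Java_org_name_othername_ClassName_nativeMethod among the libraries
>>>>> loaded into that particular JVM. Getting the error at the call rather
>>>>> than at load time suggests that no library providing that symbol was
>>>>> loaded in the executor's JVM.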
>>>>>
>>>>> When running in client mode, however, nothing fails and everything
>>>>> works as expected.
>>>>>
>>>>> Does anybody have any idea of what might be the problem here? Is there
>>>>> any bug that could be related to this when running in cluster mode?
>>>>>
>>>>> I appreciate any help.
>>>>> Thanks,
>>>>> Bernardo
>>>>>
>>>>
>>>>
>>>
>>
>
