Hi all,
I'm trying to run a Spark job (written in Scala) that uses addFile to
download some small files to each node. However, one of the downloaded
files has an incorrect size (the others are fine), which causes an error
when the file is used in the code.
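The pattern in question looks roughly like this (a minimal sketch; the file name and URL are placeholders, not the real ones):

```scala
import org.apache.spark.{SparkConf, SparkContext, SparkFiles}

object AddFileExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("addfile-example"))

    // Distribute a small file to every node; each worker downloads a copy.
    sc.addFile("hdfs:///data/lookup.txt") // placeholder path

    val sizes = sc.parallelize(1 to 4).mapPartitions { iter =>
      // SparkFiles.get resolves the local path of the downloaded copy
      // on whichever node this partition runs on.
      val localPath = SparkFiles.get("lookup.txt")
      val size = new java.io.File(localPath).length() // where the wrong size shows up
      iter.map(i => (i, size))
    }.collect()

    sizes.foreach(println)
    sc.stop()
  }
}
```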
I have looked more into the issue and
> found out that I did not reply to the group in my original reply.
>
>
>
> *From:* Prajod S Vettiyattil (WT01 - BAS)
> *Sent:* 15 October 2015 11:45
> *To:* 'Bernardo Vecchia Stein' <bernardovst...@gmail.com>
> *Subject:* RE: Running in cluster mode causes nati
Hello,
I am trying to run some Scala code in cluster mode using spark-submit. This
code uses addLibrary to link with a .so that exists on the machine, and
this library has a function that is called natively (there is a native
definition in the code, as required).
The problem I'm facing is: whenever I
>
> Deenar
>
>
>
> On 14 October 2015 at 05:44, Bernardo Vecchia Stein <
> bernardovst...@gmail.com> wrote:
>
>> Hello,
>>
>> I am trying to run some scala code in cluster mode using spark-submit.
>> This code uses addLibrary to link with a .so
>> LD_LIBRARY_PATH set, and I am able to
>> execute without issues. Maybe you'd like to double check paths, env
>> variables, or the parameters spark.driver.extraLibraryPath,
>> spark.executor.extraLibraryPath.
>>
>>
>> Best,
>>
>> Renato M.
>>
>>
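For completeness, the spark.driver.extraLibraryPath and spark.executor.extraLibraryPath parameters mentioned above can also be set programmatically when building the SparkConf (a sketch; the path is a placeholder):

```scala
import org.apache.spark.SparkConf

object LibraryPathConf {
  // Point both the driver and the executors at the directory that
  // holds the native libraries (placeholder path).
  val conf: SparkConf = new SparkConf()
    .setAppName("native-libs")
    .set("spark.driver.extraLibraryPath", "/opt/native/lib")
    .set("spark.executor.extraLibraryPath", "/opt/native/lib")
}
```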
On 14 October 2015 at 16:28, Renato Marroquín Mogrovejo <
renatoj.marroq...@gmail.com> wrote:
> You can also try setting the env variable LD_LIBRARY_PATH to point where
> your compiled libraries are.
>
>
> Renato M.
>
> 2015-10-14 21:07 GMT+02:00 Bernardo Vecchia Stein &