Thanks Jakob for sharing the link. Will try it out.
Regards,
Vineet
On Tue, Dec 13, 2016 at 3:00 PM, Jakob Odersky wrote:
Hi Vineet,
great to see you solved the problem! Since this just appeared in my
inbox, I wanted to take the opportunity for a shameless plug:
https://github.com/jodersky/sbt-jni. In case you're using sbt and also
developing the native library, this plugin may help with the pains of
building and packaging.
Thanks Steve and Kant. Apologies for late reply as I was out for vacation.
Got it working. For other users:
def loadResources() {
  System.loadLibrary("foolib")
  val myInstance = new MyClass
  val retstr = myInstance.foo("mystring") // the native method being invoked
}
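When the library cannot be found, the call above dies with java.lang.UnsatisfiedLinkError, so a guarded loader that surfaces the JVM's actual search path helps with debugging. A minimal Java sketch ("foolib" is the name from this thread; everything else here is hypothetical):

```java
public class NativeLoad {
    /** Try to load a native library; return null on success, else a diagnostic. */
    static String tryLoad(String name) {
        try {
            System.loadLibrary(name);
            return null;
        } catch (UnsatisfiedLinkError e) {
            // The JVM searches only java.library.path (not LD_LIBRARY_PATH directly),
            // so include the path it actually searched in the diagnostic.
            return e.getMessage() + " (java.library.path="
                    + System.getProperty("java.library.path") + ")";
        }
    }

    public static void main(String[] args) {
        // On a machine without foolib this prints the failure diagnostic.
        String err = tryLoad("foolib");
        System.out.println(err == null ? "loaded" : "failed: " + err);
    }
}
```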
I would say instead of LD_LIBRARY_PATH you might want to use java.library.path
in the following way
java -Djava.library.path=/path/to/my/library or pass java.library.path
along with spark-submit
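Passing java.library.path "along with spark-submit" can be done through the standard Spark properties so both the driver and the executors see it. A hypothetical sketch (class name, paths, and jar are placeholders; spark.executor.extraLibraryPath is an alternative that prepends to the native search path):

```shell
spark-submit \
  --class com.example.MySimpleApp \
  --conf spark.driver.extraJavaOptions=-Djava.library.path=/path/to/my/library \
  --conf spark.executor.extraJavaOptions=-Djava.library.path=/path/to/my/library \
  my-app.jar
```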
On Sat, Nov 26, 2016 at 6:44 PM, Gmail wrote:
Maybe you've already checked these out. Some basic questions that come to my
mind are:
1) is this library "foolib" or "foo-C-library" available on the worker node?
2) if yes, is it accessible by the user/program (rwx)?
Thanks,
Vasu.
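Vasu's two checks can be scripted on each worker node. A small Java sketch (the .so path is a placeholder; for executables you would also test canExecute()):

```java
import java.io.File;

public class LibCheck {
    /** Answer Vasu's questions for one path: is it present, and is it readable? */
    static String check(String path) {
        File f = new File(path);
        if (!f.exists()) return "missing";
        if (!f.canRead()) return "not readable";
        return "ok";
    }

    public static void main(String[] args) {
        // hypothetical location of the native library on a worker node
        System.out.println(check("/usr/lib/libfoolib.so"));
    }
}
```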
> On Nov 26, 2016, at 5:08 PM, kant kodali wrote:
If it is working for standalone program I would think you can apply the
same settings across all the spark worker and client machines and give
that a try. Lets start with that.
On Sat, Nov 26, 2016 at 11:59 AM, vineet chadha wrote:
Just subscribed to Spark User. So, forwarding message again.
On Sat, Nov 26, 2016 at 11:50 AM, vineet chadha wrote:
> Thanks Kant. Can you give me a sample program which allows me to call jni
> from executor task ? I have jni working in standalone program in
> scala/java.
>
> Regards,
> Vineet
Yes, this is a Java JNI question, nothing to do with Spark really.
java.lang.UnsatisfiedLinkError typically means the way you set up
LD_LIBRARY_PATH is wrong, unless you tell us that it is working for other
cases but not this one.
On Sat, Nov 26, 2016 at 11:23 AM, Reynold Xin wrote:
That's just standard JNI and has nothing to do with Spark, does it?
On Sat, Nov 26, 2016 at 11:19 AM, vineet chadha wrote:
> Thanks Reynold for quick reply.
>
> I have tried following:
>
> class MySimpleApp {
>   // --- Native methods
>   @native def fooMethod(foo: String): String
> }
>
> objec
bcc dev@ and add user@
This is more a user@ list question rather than a dev@ list question. You
can do something like this:
object MySimpleApp {
  def loadResources(): Unit = // define some idempotent way to load
  // resources, e.g. with a flag or lazy val
  def main() = {
    ...
    sc.paralleli
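The sketch above cuts off, but the "idempotent way to load resources" it describes can be shown with a once-only guard. A hypothetical Java sketch (the counter stands in for System.loadLibrary, which would fail here without a real native library):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class ResourceLoader {
    private static final AtomicBoolean loaded = new AtomicBoolean(false);
    static final AtomicInteger loadCount = new AtomicInteger(); // stands in for the real load

    /** Safe to call from every task; the body runs at most once per JVM. */
    static void loadResources() {
        if (loaded.compareAndSet(false, true)) {
            // real code would call System.loadLibrary("foolib") here
            loadCount.incrementAndGet();
        }
    }

    public static void main(String[] args) {
        // simulate many tasks running on one executor JVM
        for (int i = 0; i < 8; i++) loadResources();
        System.out.println("loads: " + loadCount.get()); // prints "loads: 1"
    }
}
```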
I am happy to report that after setting spark.driver.userClassPathFirst, I can
use protobuf 3 with spark-shell. Looks like the classloading issue is in the
driver, not the executor.
Marcelo, thank you very much for the tip!
Lan
> On Sep 15, 2015, at 1:40 PM, Marcelo Vanzin wrote:
Hi,
Just "spark.executor.userClassPathFirst" is not enough. You should
also set "spark.driver.userClassPathFirst". Also note that I don't
think this was really tested with the shell, but that should work with
regular apps started using spark-submit.
If that doesn't work, I'd recommend shading, as
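The shading recommended above is usually done with the maven-shade-plugin, relocating protobuf's packages inside the uber jar so they cannot clash with Spark's bundled protobuf 2.5. A hypothetical pom fragment (the version and shadedPattern prefix are placeholders):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <pattern>com.google.protobuf</pattern>
        <shadedPattern>myapp.shaded.com.google.protobuf</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```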
If you use Standalone mode, just start spark-shell like the following:
spark-shell --jars your_uber_jar --conf spark.files.userClassPathFirst=true
Yong
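For a regular app rather than spark-shell, the same idea with spark-submit looks like this, setting the flag on both the driver and the executors as Marcelo advises (jar names are placeholders):

```shell
spark-submit \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  --jars your_uber_jar \
  your-app.jar
```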
Date: Tue, 15 Sep 2015 09:33:40 -0500
Subject: Re: Change protobuf version or any other third party library version
in Spark application
From: ljia
mode, check this for the parameter:
>
> https://issues.apache.org/jira/browse/SPARK-2996
>
> Yong
Subject: Re: Change protobuf version or any other third party library version
in Spark application
From: ste...@hortonworks.com
To: ljia...@gmail.com
CC: user@spark.apache.org
Date: Tue, 15 Sep 2015 09:19:28 +
On 15 Sep 2015, at 05:47, Lan Jiang wrote:
Hi, there,
I am using Spark 1.4.1, which includes protobuf 2.5 by default. However, I
would like to use Protobuf 3 in my Spark application so that I can use some
new features such as Map support. Is there any way to do that?
Right now if I build a uber.jar with dependencies includ
I'm adding this 3rd party library to my Maven pom.xml file so that it's
embedded into the JAR I send to spark-submit:
<dependency>
  <groupId>json-mapreduce</groupId>
  <artifactId>json-mapreduce</artifactId>
  <version>1.0-SNAPSHOT</version>
  <exclusions>
    <exclusion>
      <groupId>javax.servlet</groupId>
      <artifactId>*</artifactId>
    </exclusion>
    <exclusion>
      <groupId>commons-io</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
That could be a corner case bug. How do you add the 3rd party library to
the class path of the driver? Through spark-submit? Could you give the
command you used?
TD
On Wed, Mar 4, 2015 at 12:42 AM, Emre Sevinc wrote:
I've also tried the following:
Configuration hadoopConfiguration = new Configuration();
hadoopConfiguration.set("multilinejsoninputformat.member", "itemSet");
JavaStreamingContext ssc = JavaStreamingContext.getOrCreate(
    checkpointDirectory, hadoopConfiguration, factory, false);
but I
Hello,
I have a Spark Streaming application (that uses Spark 1.2.1) that listens to
an input directory and, when new JSON files are copied to that directory,
processes them and writes them to an output directory.
It uses a 3rd party library to process the multi-line JSON files (
https://github.co