I have a Spark program that uses DataFrames to query Hive. I run it both in
spark-shell for exploration and through a runner class that executes some
tasks with spark-submit. I used to run against 1.4.0-SNAPSHOT. Since then,
1.4.0 and 1.4.1 were released, so I tried to switch to the official releases.
Thanks for the reply.
Actually, I don't think excluding spark-hive from spark-submit --packages
is a good idea.
I don't want to rebuild the Spark assembly for my cluster every time a
new Spark release is out.
I prefer using the binary distribution of Spark and then adding some jars for the job
/bin/spark-submit \
--class mgm.tp.bigdata.ma_spark.SparkMain \
--master yarn-cluster \
--executor-memory 9G \
--total-executor-cores 16 \
ma-spark.jar \
1000
Maybe my configuration is not optimal?
best regards,
paul
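A note on the flags in the command above: --total-executor-cores applies to
standalone and Mesos masters, while on YARN the parallelism is usually set with
--num-executors and --executor-cores instead. A rough yarn-cluster equivalent,
keeping the class, memory, and total of 16 cores from above (splitting them as
4 x 4 is just an assumption):

/bin/spark-submit \
--class mgm.tp.bigdata.ma_spark.SparkMain \
--master yarn-cluster \
--executor-memory 9G \
--num-executors 4 \
--executor-cores 4 \
ma-spark.jar \
1000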
When I run this command:
ashutosh@pas-lab-server7:~/spark-1.4.0$ ./bin/spark-submit \
> --class org.apache.spark.graphx.lib.Analytics \
> --master spark://172.17.27.12:7077 \
> assembly/target/scala-2.10/spark-assembly-1.4.0-hadoop2.2.0.jar \
> pagerank soc-LiveJournal1.txt --
>>>> Hi JG,
>>>>
>>>> One way this can occur is that YARN doesn't have enough resources to
>>>> run your job. Have you verified that it does? Are you able to submit
>>>> using the same command from a node on the cluster?
>> On Wed, Jul 8, 2015 at 3:19 PM, jegordon wrote:
>>
>>> I'm trying to submit a spark job from a different server outside of my
>>> Spark cluster (running spark 1.4.0, hadoop 2.4.0 and YARN) using the
>>> spark-submit script:
I'm trying to submit a spark job from a different server outside of my Spark
cluster (running spark 1.4.0, hadoop 2.4.0 and YARN) using the spark-submit
script:
spark/bin/spark-submit --master yarn-client --executor-memory 4G
myjobScript.py
The thing is that my application never passes fro
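The reply above points at YARN resources; another thing worth checking when
submitting from a machine outside the cluster is that spark-submit can actually
reach the ResourceManager, which it finds through the Hadoop client
configuration. A minimal sketch, assuming the cluster's *-site.xml files have
been copied to a local directory (the path is a placeholder):

export HADOOP_CONF_DIR=/etc/hadoop/conf-from-cluster
spark/bin/spark-submit --master yarn-client --executor-memory 4G myjobScript.py

Without that configuration in place, the submission can appear to hang.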
I want to add spark-hive as a dependency to submit my job, but it seems that
spark-submit cannot resolve it.
$ ./bin/spark-submit \
    --packages org.apache.spark:spark-hive_2.10:1.4.0,org.postgresql:postgresql:9.3-1103-jdbc3,joda-time:joda-time:2.8.1 \
    --class fr.leboncoin.etl.jobs.dwh.AdStateTraceDWHTransform \
    --master
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 9 more
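If --packages keeps failing to resolve (it fetches the coordinates through Ivy
from Maven Central plus anything given via --repositories), one workaround is
to download the jars once and pass them explicitly; a sketch with placeholder
local paths for the same three artifacts:

./bin/spark-submit \
  --jars /opt/jars/spark-hive_2.10-1.4.0.jar,/opt/jars/postgresql-9.3-1103-jdbc3.jar,/opt/jars/joda-time-2.8.1.jar \
  --class fr.leboncoin.etl.jobs.dwh.AdStateTraceDWHTransform \
  --master ...

Adding --repositories with an internal mirror is another option when the
default resolvers are not reachable from the submitting machine.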
Hi,
I want to deploy my application on a standalone cluster.
spark-submit acts in a strange way. When I deploy the application in
*"client"* mode, everything works well and my application can see the
additional jar files.
Here is the command:
> spark-submit --master spark:/
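If the problem shows up in cluster mode, keep in mind that with --deploy-mode
cluster on a standalone master the driver is launched on one of the worker
machines, so any paths given to --jars are resolved there rather than on the
machine running spark-submit. A sketch, assuming the extra jars have been
copied to the same local path on every worker (all paths are placeholders):

spark-submit \
  --master spark://master-host:7077 \
  --deploy-mode cluster \
  --jars /opt/app/libs/dep1.jar,/opt/app/libs/dep2.jar \
  --class com.example.Main /opt/app/my-app.jar

Staying in client mode, as above, is the other common workaround.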
I assume that /usr/bin/load-spark-env.sh exists. Have you got the rights to
execute it?
On Sun, Jun 28, 2015 at 04:53, Ashish Soni wrote:
> Not sure what is the issue but when i run the spark-submit or spark-shell
> i am getting below error
>
> /usr/bin/spark-class: line
Not sure what is the issue, but when I run spark-submit or spark-shell I
am getting the below error:
/usr/bin/spark-class: line 24: /usr/bin/load-spark-env.sh: No such file or
directory
Can someone please help?
Thanks,
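That error usually means the /usr/bin/spark-class being invoked is a copied or
symlinked script that can no longer find the rest of the distribution next to
it, since spark-class sources load-spark-env.sh from the bin directory of the
Spark home it computes. A quick check, assuming the full distribution lives
under /opt/spark (placeholder path):

export SPARK_HOME=/opt/spark                 # wherever the complete install actually is
ls -l "$SPARK_HOME/bin/load-spark-env.sh"    # should exist and be readable
"$SPARK_HOME/bin/spark-submit" --version     # invoke the scripts from that install directly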
Try adding them to SPARK_CLASSPATH in your conf/spark-env.sh file.
Thanks
Best Regards
On Thu, Jun 25, 2015 at 9:31 PM, Bin Wang wrote:
> I am trying to run the Spark example code HBaseTest from command line
> using spark-submit instead run-example, in that case, I can learn more ho
I am trying to run the Spark example code HBaseTest from the command line using
spark-submit instead of run-example; that way I can learn more about how to run
Spark code in general.
However, it told me CLASS_NOT_FOUND about htrace since I am using CDH 5.4. I
successfully located the htrace jar file but I
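Following the SPARK_CLASSPATH suggestion above, a sketch of two ways to put the
located htrace jar on the classpath; the jar path below is a placeholder for
wherever CDH keeps it on your machine:

# in conf/spark-env.sh
export SPARK_CLASSPATH=/opt/cloudera/parcels/CDH/jars/htrace-core.jar

# or per submission, which also ships the jar to the executors
spark-submit --jars /opt/cloudera/parcels/CDH/jars/htrace-core.jar \
  --class org.apache.spark.examples.HBaseTest ... examples.jar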
r? It seems to be getting an assembly jar that is not
> from my project (perhaps from a maven repo). Is there a way to make the ec2
> script use the assembly jar that I created?
>
> Thanks,
> Raghav
>
>
> On Friday, June 19, 2015, Andrew Or wrote:
> Hi Raghav,
>
> I
> 2. cd spark; build/mvn clean package -DskipTests [...]
> 3. make local changes
> 4. build/mvn package -DskipTests [...] (no need to clean again here)
> 5. bin/spark-submit --master spark://[...] --class your.main.class your.jar
>
> No need to pass in extra --driver-java-options or --driver-extra-classpath
> as others have suggested. When using spark-submit, the main jar comes from
> assembly/target/scala-2.10, which is prepared through "mvn package".
Thanks Andrew.
We cannot include Spark in our Java project due to dependency issues. Spark
will not be exposed to clients.
What we want to do is to put the Spark tarball (in the worst case) into HDFS, so
that our Java app, which runs in local mode, can launch the spark-submit script
with the user's Python files
Hi Elkhan,
Spark submit depends on several things: the launcher jar (1.3.0+ only), the
spark-core jar, and the spark-yarn jar (in your case). Why do you want to
put it in HDFS though? AFAIK you can't execute scripts directly from HDFS;
you need to copy them to a local file system first. I
Hi all,
If I want to ship the spark-submit script to HDFS and then call it from the HDFS
location to start a Spark job, which other files/folders/jars need to be
transferred into HDFS along with the spark-submit script?
Due to some dependency issues, we cannot include Spark in our Java
application, so instead we
You can specify the jars of your application to be included with spark-submit
with the --jars switch.
Otherwise, are you sure that your newly compiled Spark assembly jar is in
assembly/target/scala-2.10/?
> This is not an independent, programmatic way of running a Spark job on a YARN
> cluster.
The example I created simply demonstrates how to wire up the classpath so
that spark-submit can be called programmatically. For my use case, I wanted
to hold open a connection so I could send tasks to
Hi Elkhan,
There are a couple of ways to do this.
1) Spark-jobserver is a popular web server that is used to submit Spark jobs:
https://github.com/spark-jobserver/spark-jobserver
2) The spark-submit script sets the classpath for t
Hi all,
Is there any way of running a Spark job programmatically on a YARN cluster
without using the spark-submit script?
I cannot include Spark jars in my Java application (due to dependency
conflicts and other reasons), so I'll be shipping the Spark assembly uber jar
(spark-assembly-1.3.1-hadoop2.3.0.jar) to the YARN cluster, and then execute
the job (Python or
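One option that fits this (and matches SPARK-4924, mentioned further down the
thread) is the launcher API added in Spark 1.4, org.apache.spark.launcher.SparkLauncher,
which forks a spark-submit process from a plain JVM program, so only the small
launcher jar is needed on your application's classpath. A minimal sketch; the
Spark home, app resource, and settings are placeholders:

import org.apache.spark.launcher.SparkLauncher

object SubmitToYarn {
  def main(args: Array[String]): Unit = {
    // All paths and values below are placeholders for your environment.
    val process = new SparkLauncher()
      .setSparkHome("/opt/spark")             // local Spark distribution that provides spark-submit
      .setAppResource("/opt/jobs/my_job.py")  // or a jar, together with setMainClass(...)
      .setMaster("yarn-cluster")
      .setConf("spark.executor.memory", "2g")
      .launch()                               // starts spark-submit as a child process
    val exitCode = process.waitFor()
    println("spark-submit exited with code " + exitCode)
  }
}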
To clarify, I am using the spark standalone cluster.
On Tuesday, June 16, 2015, Yanbo Liang wrote:
> If you run Spark on YARN, the simplest way is replace the
> $SPARK_HOME/lib/spark-.jar with your own version spark jar file and run
> your application.
> The spark-submit script
If you run Spark on YARN, the simplest way is to replace the spark-assembly jar
under $SPARK_HOME/lib/ with your own version of the Spark jar file and run
your application.
The spark-submit script will upload this jar to the YARN cluster automatically
and then you can run your application as usual.
It does not care about
The documentation says spark.driver.userClassPathFirst can only be used in
cluster mode. Does this mean I have to set the --deploy-mode option for
spark-submit to cluster? Or can I still use the default client mode? My
understanding is that even in the default deploy mode, Spark still uses the
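For the cluster-mode case, a sketch of what that setting looks like on the
command line (the class and jar names are placeholders). In client mode the
driver JVM and its classpath are already up by the time the conf is applied,
which is presumably why the option only takes effect in cluster mode; for a
client-mode driver, --driver-class-path is the usual way to prepend jars.

spark-submit \
  --deploy-mode cluster \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  --class com.example.Main my-app.jar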
If this is research-only, and you don't want to have to worry about updating
the jars installed by default on the cluster, you can add your custom Spark jar
using the "spark.driver.extraLibraryPath" configuration property when running
spark-submit, and then use t
I made the change so that I could implement top() using treeReduce(). A member
on here suggested I make the change in RDD.scala to accomplish that. Also, this
is for a research project, and not for commercial use.
So, any advice on how I can get the spark submit to use my custom built jars
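Assuming the goal is just to have the driver and executors pick up the modified
classes ahead of the stock assembly, one sketch (jar path, class, and app name
are placeholders) is to put the custom jar on both classpaths. Note that the
extraLibraryPath setting mentioned above targets native libraries; the classpath
variants are the ones that affect jars, and the jar must also exist at the given
path on the worker machines:

spark-submit \
  --driver-class-path /path/to/spark-core-custom.jar \
  --conf spark.executor.extraClassPath=/path/to/spark-core-custom.jar \
  --class com.example.MyTopJob my-app.jar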
I am trying to submit a Spark application using the command line. I used the
spark-submit command for doing so. I initially set up my Spark application in
Eclipse and have been making changes there. I recently obtained my own
version of the Spark source code and added a new method to RDD.scala. I
created a new spark-core jar using mvn
On 12 June 2015 at 19:39, Ted Yu wrote:
> This is the SPARK JIRA which introduced the warning:
>
> [SPARK-7037] [CORE] Inconsistent behavior for non-spark config properties
> in spark-shell and spark-submit
>
> On Fri, Jun 12, 2015 at 4:34 PM, Peng Cheng wrote:
>
>> Hi A
This is the SPARK JIRA which introduced the warning:
[SPARK-7037] [CORE] Inconsistent behavior for non-spark config properties
in spark-shell and spark-submit
On Fri, Jun 12, 2015 at 4:34 PM, Peng Cheng wrote:
> Hi Andrew,
>
> Thanks a lot! Indeed, it doesn't start with spark,
Turns out one of the other developers wrapped the jobs in a script and did a
cd to another folder in the script before executing spark-submit.
On 12 June 2015 at 14:20, Matthew Jones wrote:
> Hmm either spark-submit isn't picking up the relative path or Chronos is
> not setting y
How do I set the driver's system property (e.g. xxx.xxx=v) in 1.4.0? Is there a
reason it was removed without a deprecation warning?
Thanks a lot for your advice.
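One way to get a plain -D system property to the driver in 1.4 (the property
name is taken from the question above; the class and jar are placeholders) is
to route it through the driver's JVM options rather than a bare --conf:

spark-submit --conf "spark.driver.extraJavaOptions=-Dxxx.xxx=v" \
  --class com.example.Main my-app.jar

# equivalently, via the dedicated flag:
spark-submit --driver-java-options "-Dxxx.xxx=v" --class com.example.Main my-app.jar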
Hmm either spark-submit isn't picking up the relative path or Chronos is
not setting your working directory to your sandbox. Try using "cd
$MESOS_SANDBOX && spark-submit --properties-file props.properties"
On Fri, Jun 12, 2015 at 12:32 PM Gary Ogden wrote:
> That'
That's a great idea. I did what you suggested and added the URL of the
props file to the uris in the task JSON. The properties file now shows up in the
sandbox. But when it goes to run spark-submit with "--properties-file
props.properties" it fails to find it:
Exception
If you are using chronos you can just put the url in the task json and
chronos will download it into your sandbox. Then just use spark-submit
--properties-file app.properties.
On Thu, 11 Jun 2015 15:52 Marcelo Vanzin wrote:
> That's not supported. You could use wget / curl to download
That's not supported. You could use wget / curl to download the file to a
temp location before running spark-submit, though.
On Thu, Jun 11, 2015 at 12:48 PM, Gary Ogden wrote:
> I have a properties file that is hosted at a url. I would like to be able
> to use the url in the --prop
I have a properties file that is hosted at a URL. I would like to be able
to use the URL in the --properties-file parameter when submitting a job to
Mesos using spark-submit via Chronos.
I would rather do this than use a file on the local server.
This doesn't seem to work, though, when submi
in the $SPARK_HOME/conf directory, I tried to share its config with Spark.
When I start spark-shell, it gives me a default sqlContext, and I can use
that to access my Hive tables with no problem.
But once I submit a similar query via a Spark application through
'spark-submit', it does not see the tables, and it seems it does not pick up
hive-site.xml, which is under the conf directory in Spark's home. I tried to
use the '--files' argument with spark-submit to pass
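Two commonly suggested ways to handle this, sketched with placeholder paths
(the hive-site.xml location depends on your Hive install): either ship the file
with the job, or place it in Spark's conf directory, which spark-submit puts on
the driver's classpath.

# ship hive-site.xml alongside the application
spark-submit --files /etc/hive/conf/hive-site.xml --class com.example.HiveQuery my-app.jar

# or copy it once into Spark's conf dir
cp /etc/hive/conf/hive-site.xml $SPARK_HOME/conf/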
>> I'm using standalone mode.
>>
>> Any ideas?
>>
>> Thanks
>>
>> Dong Lei
Subject: Re: ClassNotDefException when using spark-submit with multiple jars
and files located on HDFS
I am not sure they work with HDFS paths. You may want to look at the source
code. Alternatively you can create a "fat" jar containing all jars (let your
build tool set up META-INF correctly). T
HDFS.
Moreover, is there a specific need to use Spark in this case?
I would like to write PDF files using PDFBox to HDFS from my Spark
application. Can this be done?
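It can, as long as the PDFs are produced inside the job and streamed to HDFS
through the Hadoop FileSystem API rather than the local file system. A rough
sketch, assuming PDFBox and spark-core are on the classpath; the output path
and document contents are placeholders and error handling is omitted:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.pdfbox.pdmodel.{PDDocument, PDPage}
import org.apache.spark.{SparkConf, SparkContext}

object PdfToHdfs {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("pdf-to-hdfs"))
    sc.parallelize(1 to 4, 4).foreachPartition { _ =>
      // Runs on the executors; assumes the cluster's core-site.xml is on their
      // classpath so FileSystem.get resolves to HDFS. Output path is a placeholder.
      val fs = FileSystem.get(new Configuration())
      val out = fs.create(new Path("/user/me/pdfs/part-" + java.util.UUID.randomUUID + ".pdf"))
      val doc = new PDDocument()
      doc.addPage(new PDPage())   // real content would be added here via PDFBox's API
      doc.save(out)               // PDFBox can write to any OutputStream
      doc.close()
      out.close()
    }
    sc.stop()
  }
}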
I figured it out, in case anyone else has this problem in the future:
spark-submit --driver-class-path lib/postgresql-9.4-1201.jdbc4.jar
--packages com.databricks:spark-csv_2.10:1.0.3 path/to/my/script.py
What I found is that you MUST put the path to your script at the end of the
spark-submit command
Once you submit the application, you can check the Environment tab in the
driver UI (running on port 4040) to see whether those jars you added got picked up
by putting the jar with the class in it on the top of your
classpath.
Thanks
Best Regards
On Tue, Jun 9, 2015 at 9:05 AM, Dong Lei wrote:
Hi, spark-users:
I'm using spark-submit to submit multiple jars and files (all in HDFS) to run a
job, with the following command:
spark-submit
--class myClass
--master spark://localhost:7077/
--deploy-mode cluster
--jars hdfs://localhost/1.jar, hdfs://localhost/2.jar
--files
Hi,
I want to run my Spark app on a cluster; I use the Cloudera Live single-node VM.
How must I build the job for the spark-submit script?
And must I upload it to HDFS for spark-submit?
Best regards,
Paul
Hi!
I have a little problem... If I start my Spark application as a Java app
(locally) it works like a charm, but if I start it on the Hadoop cluster (tried
spark-submit with --master local[5] and --master yarn-client), it's not
working. No error, no exception; it periodically runs the job b
Thanks Sandy- I was digging through the code in the deploy.yarn.Client and
literally found that property right before I saw your reply. I'm on 1.2.x
right now which doesn't have the property. I guess I need to update sooner
rather than later.
On Thu, May 28, 2015 at 3:56 PM, Sandy Ryza wrote:
>
Hi Corey,
As of this PR https://github.com/apache/spark/pull/5297/files, this can be
controlled with spark.yarn.submit.waitAppCompletion.
-Sandy
On Thu, May 28, 2015 at 11:48 AM, Corey Nolet wrote:
> I am submitting jobs to my yarn cluster via the yarn-cluster mode and I'm
> noticing the jvm t
I am submitting jobs to my yarn cluster via the yarn-cluster mode and I'm
noticing the jvm that fires up to allocate the resources, etc... is not
going away after the application master and executors have been allocated.
Instead, it just sits there printing 1 second status updates to the
console. I
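For reference, a sketch of the flag from the PR mentioned above (class and jar
are placeholders): with it set to false, spark-submit returns as soon as the
application is accepted instead of sitting there polling for status every second.

spark-submit \
  --master yarn-cluster \
  --conf spark.yarn.submit.waitAppCompletion=false \
  --class com.example.Main my-app.jar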
Hello,
I am trying to use the default Spark cluster manager in a production
environment. I will be submitting jobs with spark-submit. I wonder if the
following is possible:
1. Get the driver ID from spark-submit. We will use this ID to keep track
of the job and kill it if necessary.
2. Whether
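For the standalone master in cluster deploy mode, a sketch of how this usually
looks (host, port, and the submission ID are placeholders): the submission step
prints a driver/submission ID, and spark-submit itself accepts --status and
--kill for it.

spark-submit --master spark://master-host:6066 --deploy-mode cluster \
  --class com.example.Main my-app.jar     # prints an ID such as driver-20150601123456-0000

spark-submit --master spark://master-host:6066 --status driver-20150601123456-0000
spark-submit --master spark://master-host:6066 --kill driver-20150601123456-0000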
If you are using the Spark standalone deployment, make sure you set the
worker memory (SPARK_WORKER_MEMORY) to over 20G, and that you do have 20G of
physical memory.
Yong
> Date: Tue, 7 Apr 2015 20:58:42 -0700
> From: li...@adobe.com
> To: user@spark.apache.org
> Subject: EC2 spark-submit --executor-memory
Hi,
I observed that we have installed only one cluster, and when submitting the job
as yarn-cluster I am getting the error below. Is the cause that the installation
is only one cluster? Please correct me; if this is not the cause, then why am I
not able to run in cluster mode?
The spark-submit command is:
spark-submit --jars some dependent jars... --master yarn --class
com.java.jobs.sparkAggregation mytest-1.0.0
uler-event-loop"
java.lang.OutOfMemoryError: Java heap space, but that's another issue.
./bin/spark-submit \
--class ... \
--executor-memory 20G \
/path/to/examples.jar
Thanks.
bly have your answer
why it is failing.
Client mode would not support HDFS jar extraction.
I tried this:
sudo -u hdfs spark-submit --class org.apache.spark.examples.SparkPi
--deploy-mode cluster --master yarn
hdfs:///user/spark/spark-examples-1.2.0-cdh5.3.2-hadoop2.5.0-cdh5.3.2.jar 10
And it worked.
'1'
I0329 20:39:26.241459 2524 slave.cpp:3237] Framework
20150322-040336-606645514-5050-2744-0037 seems to have exited. Ignoring
registration timeout for executor '4'
$ cat mesos-slave.WARNING | grep 20150322-040336-606645514-5050-2744-0037
W0329 20:35:50.453419 2526 slave.cpp:1548] C
Made it work by using yarn-cluster as master instead of local.
works/20150329-232522-84118794-5050-18181-/executors/5/runs/latest/stderr
The latter part of this question where I try to submit the application by
referring to it on HDFS is very similar to the recent question
Spark-submit not working when application jar is in hdfs
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-submit-not-working-when-application-jar-is-in
ample.
./spark-submit --class org.apache.spark.examples.SparkPi --master
mesos://10.173.40.36:5050
~/spark-1.3.0-bin-hadoop2.4/lib/spark-examples-1.3.0-hadoop2.4.0.jar 100
And the output:
jclouds@development-5159-d9:~/learning-spark$
~/spark-1.3.0-bin-hadoop2.4/bin/spark
wrote:
> I'm trying to run a spark application using bin/spark-submit. When I
> reference my application jar inside my local filesystem, it works. However,
> when I copied my application jar to a directory in hdfs, I get the
> following
> exception:
>
> Warning: Skip remote
Hi, did you resolve this issue or just work around it by keeping your
application jar local? Running into the same issue with 1.3.
Sandy Ryza writes:
> Creating a SparkContext and setting master as yarn-cluster unfortunately
> will not work.
>
> SPARK-4924 added APIs for doing this in Spark, but won't be included until
> 1.4.
>
> -Sandy
>
Did you look into something like [1]? With that you can make rest API
call from your j
Creating a SparkContext and setting master as yarn-cluster unfortunately
will not work.
SPARK-4924 added APIs for doing this in Spark, but won't be included until
1.4.
-Sandy
On Tue, Mar 17, 2015 at 3:19 AM, Akhil Das
wrote:
> Create SparkContext set master as yarn-cluster then run it as a sta
implementation from:
com.github.fommil.netlib.NativeRefBLAS
From the Spark UI, I can see:
spark.driver.extraLibrary c:\openblas
Thanks,
David
On Sun, Mar 22, 2015 at 11:45 AM Ted Yu wrote:
> Can you try the --driver-library-path option ?
>
> spark-submit --driver-library-path /op
Can you try the --driver-library-path option ?
spark-submit --driver-library-path /opt/hadoop/lib/native ...
Cheers
On Sat, Mar 21, 2015 at 4:58 PM, Xi Shen wrote:
> Hi,
>
> I use the *OpenBLAS* DLL, and have configured my application to work in
> IDE. When I start my Spark appl
Hi,
I use the *OpenBLAS* DLL, and have configured my application to work in the
IDE. When I start my Spark application from the IntelliJ IDE, I can see in the
log that the native lib is loaded successfully.
But if I use *spark-submit* to start my application, the native lib still
cannot be loaded. I saw
.
>
>
>
> On Thu, Mar 19, 2015 at 8:03 PM, Davies Liu wrote:
>>
>> You could submit additional Python source via --py-files , for example:
>>
>> $ bin/spark-submit --py-files work.py main.py
>>
>> On Tue, Mar 17, 2015 at 3:29 AM, poiuytrez
>
Hi,
I am curious: when I start a Spark program in local mode, which parameter
will be used to decide the JVM memory size for the executor?
In theory it should be:
--executor-memory 20G
But I remember local mode will only start the Spark executor in the same process
as the driver, so then it should be:
--driver-memory
I tried your program in yarn-client mode and it worked with no
exception. This is the command I used:
spark-submit --master yarn-client --py-files work.py main.py
(Spark 1.2.1)
On 20.3.2015. 9:47, Guillaume Charhon wrote:
Hi Davies,
I am already using --py-files. The system does use the