I opened the pom file (cd ~/spark/sql/catalyst; vi pom.xml) and increased
"-Xss4m" to "-Xss4g", but still no luck. It is the same StackOverflowError.
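For reference, that flag lives in the scala-maven-plugin configuration of the POMs; a sketch of the kind of fragment to look for (plugin details vary by Spark version, and note that -Xss is a per-thread stack size, so a value like 4g is far beyond what scalac needs):

```xml
<!-- Illustrative fragment only; the exact plugin section differs by release.
     The stack size is passed to the forked compiler JVM as a jvmArg. -->
<plugin>
  <groupId>net.alchim31.maven</groupId>
  <artifactId>scala-maven-plugin</artifactId>
  <configuration>
    <jvmArgs>
      <!-- per-thread stack; 16m is usually plenty for scalac -->
      <jvmArg>-Xss16m</jvmArg>
    </jvmArgs>
  </configuration>
</plugin>
```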
On Fri, Dec 16, 2022 at 9:42 PM Sean Owen wrote:
> OK that's good. Hm, I seem to recall the build needs more mem in Java 11
> and/or some envs.
I use java 17 to build this.
Are there any reasons why you have to build Spark yourself? Can't you start
from the Spark 3.3.1 tar file and build a Docker image from there?
On Fri, Dec 16, 2022 at 18:26, Gnana Kumar wrote:
> I have opened the pom file under cd ~/spark/sql/catalyst$ vi pom.xml and
>
OK that's good. Hm, I seem to recall the build needs more mem in Java 11
and/or some envs. As a quick check, try replacing all "-Xss4m" with
"-Xss16m" or something larger, in the project build files. Just search and
replace.
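That search-and-replace can be scripted; here is a sketch demonstrated on a scratch file (in the real tree you would run the same sed over each pom.xml, e.g. via find/xargs; GNU sed syntax assumed):

```shell
# Demo of the replacement on a scratch pom.xml (illustrative content);
# in the Spark source tree the same sed works over the real POMs.
mkdir -p /tmp/xss-demo
printf '<jvmArg>-Xss4m</jvmArg>\n' > /tmp/xss-demo/pom.xml
# replace every -Xss4m with -Xss16m, in place (GNU sed)
sed -i 's/-Xss4m/-Xss16m/g' /tmp/xss-demo/pom.xml
cat /tmp/xss-demo/pom.xml
```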
On Fri, Dec 16, 2022 at 9:53 AM Gnana Kumar wrote:
> I have been follow
You need to increase the stack size during compilation. The included Maven
wrapper (build/mvn) does this. Are you using it?
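If you are building with a standalone mvn instead of the bundled wrapper, you can approximate what the wrapper does by setting MAVEN_OPTS yourself; the values below are illustrative, not the wrapper's exact settings:

```shell
# The bundled wrapper is the easy path and sets these for you:
#   ./build/mvn -DskipTests package
# With a standalone mvn, raise the heap, thread-stack, and code-cache
# limits yourself (illustrative values):
export MAVEN_OPTS="-Xmx4g -Xss16m -XX:ReservedCodeCacheSize=512m"
echo "$MAVEN_OPTS"
```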
On Fri, Dec 16, 2022 at 9:13 AM Gnana Kumar wrote:
> This is my latest error, and the SPARK CATALYST build fails:
>
> Exception in thread "main" java.lang.StackOverflowError
>
Hi there,
I have cloned the Spark 3.3.2-SNAPSHOT source code, where the Volcano feature
is present, and then built the binary distribution from it.
However, when I tried to use this Spark 3.3.2-SNAPSHOT dist with Volcano,
I got a ClassNotFoundException.
--conf
spark.kubernetes.driver.pod.fe
> Sincerely yours,
>
>
> Raymond
>
> On Sun, Jun 17, 2018 at 2:36 PM, Subhash Sriram wrote:
> Hi Raymond,
>
> If you set your master to local[*] instead of yarn-client, it should run on
> your local machine.
>
> Thanks,
> Subhash
>
> Sent from my iPhone
>
> On Jun 17, 2018, at 2:32 PM, Raymond Xie wrote:
>
> Hello,
>
> I am wondering how I can run a Spark job in my environment, which is a single
> Ubuntu host with no Hadoop installed. If I run my job as below, I will
> end up with an infinite loop at the end. Thank you very much.
Hi Raymond,
If you set your master to local[*] instead of yarn-client, it should run on
your local machine.
Thanks,
Subhash
Sent from my iPhone
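Concretely, the change Subhash suggests would look something like this; the class name is from the original post, while the jar name is a placeholder (the sketch builds the command as a string rather than executing it, since running it needs a Spark install and the application jar):

```shell
# --master local[*] runs the job on all local cores, with no Hadoop/YARN
# required at all. retail_db.jar is a placeholder jar name.
CMD='spark-submit --master local[*] --class retail_db.GetRevenuePerOrder retail_db.jar'
echo "$CMD"
```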
> On Jun 17, 2018, at 2:32 PM, Raymond Xie wrote:
>
> Hello,
>
> I am wondering how can I run spark job in my environment wh
Hello,
I am wondering how I can run a Spark job in my environment, which is a single
Ubuntu host with no Hadoop installed. If I run my job as below, I will
end up with an infinite loop at the end. Thank you very much.
rxie@ubuntu:~/data$ spark-submit --class retail_db.GetRevenuePerOrder
--conf
Thanks!
I finally made this work. Besides the LinuxContainerExecutor parameter and
the cache directory permissions, the following parameter also needs to be
set to the specified user:
yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user
Thanks.
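In yarn-site.xml, the property mentioned above looks roughly like this (the user name is just an example):

```xml
<property>
  <name>yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user</name>
  <!-- example: in non-secure mode, run containers as this local user -->
  <value>hdfs</value>
</property>
```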
2018-01-22 22:44 GMT+08:00 Margusja :
> Hi
>
Hi,
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor requires the
user to exist on each node and the right permissions to be set on the
necessary directories.
Br
Margus
> On 22 Jan 2018, at 13:41, sd wang wrote:
>
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor
Configure Kerberos
> On 22. Jan 2018, at 08:28, sd wang wrote:
>
> Hi Advisers,
> When submitting a Spark job in YARN cluster mode, the job is executed by
> the "yarn" user. Can any parameter change the user? I tried setting
> HADOOP_USER_NAME but it did not work. I'm using Spark 2.2.
> Thanks for any help!
Hi Margus,
Appreciate your help!
It seems this parameter is related to the CGroups functions.
I am using CDH without Kerberos. I set the parameter:
yarn.nodemanager.container-executor.class=org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor
Then I ran the Spark job again and hit the problem as
Hi,
One way to get it is to use the YARN configuration parameter
yarn.nodemanager.container-executor.class.
By default it is
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor;
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor gives you the
user who executes the script.
Br
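As a yarn-site.xml fragment, the switch described above looks like this (note that LinuxContainerExecutor also requires yarn.nodemanager.linux-container-executor.group to be set and the container-executor binary to be installed with the right permissions):

```xml
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <!-- default is org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor;
       LinuxContainerExecutor runs containers as the submitting user -->
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
```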
Hi Advisers,
When submitting a Spark job in YARN cluster mode, the job is executed by the
"yarn" user. Can any parameter change the user? I tried
setting HADOOP_USER_NAME but it did not work. I'm using Spark 2.2.
Thanks for any help!
>> I am developing a workflow system based on Oozie, but
>> it only supports java and mapreduce now, so I want to run spark job as in
>> local mode by the workflow system first, then extend the workflow system to
>> run spark job on Yarn.
>>
>
> You can run spark code on o
Hi Jeff,
Thanks for your info! I am developing a workflow system based on Oozie, but it
only supports Java and MapReduce now, so I want to run Spark jobs in local
mode from the workflow system first, then extend the workflow system to run
Spark jobs on YARN.
Best wishes,
Fei
> On Mar
Yes, you can. But this is actually what spark-submit does for you; in fact,
spark-submit does more than that.
You can refer here:
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala
What's your purpose for using "java -cp"? For local development, I
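To give a sense of what that means, spark-submit ultimately launches a plain JVM along these lines; the paths and the main class below are placeholders, and the real launcher sets many more system properties and classpath entries:

```shell
# Sketch only: build the command as a string instead of running it, since
# executing it needs a real Spark install and application jar.
SPARK_HOME=/opt/spark   # placeholder install path
CMD="java -cp ${SPARK_HOME}/jars/*:myapp.jar -Dspark.master=local[*] com.example.Main"
echo "$CMD"
```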
Hi,
I am wondering how to run a Spark job with the java command, such as: java -cp
spark.jar mainclass. When running/debugging a Spark program in IntelliJ IDEA,
it uses the java command to run the Spark main class, so I think it should be
possible to run the Spark job with the java command besides spark-submit.
Hi,
I have a non-secure Hadoop 2.7.2 cluster on EC2 with Spark 1.5.2.
I am submitting my Spark Scala script through a shell script using an Oozie
workflow.
I am submitting the job as the hdfs user, but it is running as user "yarn",
so all the output gets stored under the user/yarn directory only.
When I
error. I then tried taking out the 'sc = SparkContext()' line from
the .py file, but then it couldn't access sc.
How can I %run another Python Spark file within IPython Notebook?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Run-Spark-jo
> to execute the job directly from
> java code within my jms listener and/or servlet container.
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/What-is-best-way-to-run-spark-job-in-yarn-cluster-mode-from-java-program-servlet-container-and-NOT-u-tp21817p22086.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
EMR, so the approach should work there also.
Thanks in advance.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/What-is-best-way-to-run-spark-job-in-yarn-cluster-mode-from-java-program-servlet-container-and-NOT-u-tp21817.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
SigmoidAnalytics, Bangalore
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Not-able-to-run-spark-job-from-code-on-EC2-with-spark-1-2-0-tp21217.html
Sent from the Apache Spark User List mailing list archive at Nabble.com
> (sparkContext, this.getClass.getClassLoader) discussed in
> http://mail-archives.apache.org/mod_mbox/spark-user/201412.mbox/%3CCAJOb8buD1B6tUtOfG8_Ok7F95C3=r-zqgffoqsqbjdxd427...@mail.gmail.com%3E
>
> Thanks,
> Aniket
>
> On Mon Dec 15 2014 at 07:43:24 Tomoya Igarashi <tomoya.igarashi.0...@gmail.com> wrote:
> Hi all,
>
> I am trying to run Spark job on Playframework + Spark Master/Worker in one
> Mac.
> When job ran, I encountered java.lang.ClassNotFoundException.
> Would you teach me how to solve it?
>
> Here is my code i
Hi all,
I am trying to run a Spark job with Play Framework + Spark Master/Worker on
one Mac.
When the job ran, I encountered java.lang.ClassNotFoundException.
Could you teach me how to solve it?
Here is my code on GitHub:
https://github.com/TomoyaIgarashi/spark_cluster_sample
* Environments:
Mac 10.9.5
We currently choose not to run jobs on YARN, so I stopped trying this.
Anyway, thanks for your suggestions.
At least your solutions may help people who must run their jobs on YARN. :)
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/how-to-run-spark-job-on-yarn-with-jni-lib-tp15146p15351.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/how-to-run-spark-job-on-yarn-with-jni-lib-tp15146p15195.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
Is there anyone who knows how to solve this problem?
> Thanks.
>
> Ziv Huang
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/how-to-run-spark-job-on-yarn-with-jni-lib-tp15146.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.