[
https://issues.apache.org/jira/browse/SPARK-11555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen updated SPARK-11555:
------------------------------
Priority: Critical (was: Minor)
Oops, somehow my priority change got flicked the wrong way. I think this one is
fairly important!
> spark on yarn spark-class --num-workers doesn't work
> ----------------------------------------------------
>
> Key: SPARK-11555
> URL: https://issues.apache.org/jira/browse/SPARK-11555
> Project: Spark
> Issue Type: Bug
> Components: YARN
> Affects Versions: 1.5.2
> Reporter: Thomas Graves
> Priority: Critical
>
> Using the old spark-class and --num-workers interface, the --num-workers
> parameter is ignored and the default number of executors (2) is always used.
> bin/spark-class org.apache.spark.deploy.yarn.Client \
>   --jar lib/spark-examples-1.5.2.0-hadoop2.6.0.16.1506060127.jar \
>   --class org.apache.spark.examples.SparkPi \
>   --num-workers 4 --worker-memory 2g \
>   --master-memory 1g --worker-cores 1 --queue default
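> For comparison, a minimal sketch of the same job submitted through the
> supported spark-submit entry point, assuming the deprecated worker flags
> map to their documented replacements (--num-workers to --num-executors,
> --worker-memory to --executor-memory, --master-memory to --driver-memory,
> --worker-cores to --executor-cores); the jar path is the one from this
> report:
>
> # Equivalent spark-submit invocation (sketch, not a verified repro)
> bin/spark-submit \
>   --master yarn-cluster \
>   --class org.apache.spark.examples.SparkPi \
>   --num-executors 4 \
>   --executor-memory 2g \
>   --driver-memory 1g \
>   --executor-cores 1 \
>   --queue default \
>   lib/spark-examples-1.5.2.0-hadoop2.6.0.16.1506060127.jar
>
> In this path --num-executors is honored, which is why the bug appears
> specific to the legacy spark-class Client argument parsing.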
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)