Re: getting error when submit spark with master as yarn

2015-02-09 Thread Al M
Open up 'yarn-site.xml' in your Hadoop configuration.  You want to set
yarn.nodemanager.resource.memory-mb and
yarn.scheduler.maximum-allocation-mb.  Have a look here for details on how
they work:
https://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
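
For instance, a minimal yarn-site.xml fragment raising both limits might look like the sketch below. The 4096 MB value is purely illustrative; size it to the physical RAM actually available on your nodes:

```xml
<!-- Illustrative values only: adjust to the memory on your nodes. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value> <!-- total MB a NodeManager may hand out to containers -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value> <!-- largest single container the scheduler will grant -->
</property>
```

After editing, restart the NodeManagers (and the ResourceManager for the scheduler property) so the new limits take effect.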



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/getting-error-when-submit-spark-with-master-as-yarn-tp21542p21547.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



getting error when submit spark with master as yarn

2015-02-07 Thread sachin Singh
Hi,
When I try to execute my program as
spark-submit --master yarn --class com.mytestpack.analysis.SparkTest
sparktest-1.jar

I get the error below:
java.lang.IllegalArgumentException: Required executor memory (1024+384 MB)
is above the max threshold (1024 MB) of this cluster!
	at org.apache.spark.deploy.yarn.ClientBase$class.verifyClusterResources(ClientBase.scala:71)
	at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:35)
	at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:77)
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
	at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:140)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:335)
	at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)

I am new to the Hadoop environment.
Please help with how and where I need to set memory or any other
configuration. Thanks in advance.
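
The arithmetic behind the exception is worth spelling out: YARN compares the requested executor memory plus a memory overhead against the cluster's maximum container allocation (yarn.scheduler.maximum-allocation-mb). A small sketch with the numbers from the message above (the 1024 and 384 MB figures come straight from the error; the variable names are mine):

```shell
# Values taken from the error message above (all in MB).
EXECUTOR_MEM=1024      # executor memory requested by spark-submit (the default)
OVERHEAD=384           # YARN memory overhead added on top of the executor heap
MAX_ALLOC=1024         # yarn.scheduler.maximum-allocation-mb on this cluster

REQUIRED=$((EXECUTOR_MEM + OVERHEAD))

# YARN rejects the container request when required > max allocation.
if [ "$REQUIRED" -gt "$MAX_ALLOC" ]; then
  echo "Required executor memory (${EXECUTOR_MEM}+${OVERHEAD} MB) is above the max threshold (${MAX_ALLOC} MB)"
fi
```

Since 1024 + 384 = 1408 MB exceeds the 1024 MB cap, the submission fails before any container is launched.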







Re: getting error when submit spark with master as yarn

2015-02-07 Thread Sandy Ryza
Hi Sachin,

In your YARN configuration, either yarn.nodemanager.resource.memory-mb is
set to 1024 on your nodes or yarn.scheduler.maximum-allocation-mb is set to
1024.  If you have more than 1024 MB available on each node, you should bump
these properties.  Otherwise, you should request fewer resources by setting
--executor-memory and --driver-memory when you launch your Spark job.
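
For example, a submission that fits under a 1024 MB cap might look like the command below. The class and jar names are taken from the original post; the 512m values are illustrative, chosen so that 512 MB plus the 384 MB overhead (896 MB) stays under the 1024 MB maximum allocation:

```shell
spark-submit --master yarn \
  --executor-memory 512m \
  --driver-memory 512m \
  --class com.mytestpack.analysis.SparkTest \
  sparktest-1.jar
```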

-Sandy
