OK, yarn.scheduler.maximum-allocation-mb is 16384.
I have run it again; the command to run it is:
./bin/spark-submit --class org.apache.spark.examples.SparkPi \
  --master yarn-cluster --driver-memory 4g --executor-memory 8g \
  lib/spark-examples*.jar 200
>
>
> 15/11/24 16:15:56 INFO
If YARN has only 50 cores, then it can support at most 49 executors plus 1
application master for the driver.
Regards
Sab
On 24-Nov-2015 1:58 pm, "谢廷稳" wrote:
> OK, yarn.scheduler.maximum-allocation-mb is 16384.
>
> I have run it again; the command to run it is:
> ./bin/spark-submit
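Sab's core-count reasoning above can be sketched as a toy calculation (the 1-core-per-executor and 1-core AM figures are illustrative assumptions, not values from the thread):

```python
# Toy sketch of the core budget on a YARN queue: the application master
# takes its share first, and executors fit into whatever remains.
def max_executors(total_vcores, cores_per_executor=1, am_cores=1):
    """Upper bound on executors a YARN queue can host alongside the AM."""
    return (total_vcores - am_cores) // cores_per_executor

print(max_executors(50))   # 49 executors + 1 application master
print(max_executors(300))  # the 6-node, 300-core cluster mentioned later
```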
Did you set the configuration "spark.dynamicAllocation.initialExecutors"?
You can set spark.dynamicAllocation.initialExecutors to 50 and try again.
I guess you might be hitting this issue since you're running 1.5.0:
https://issues.apache.org/jira/browse/SPARK-9092. But it still cannot
explain
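The initialExecutors suggestion above could be passed on the command line roughly as follows (a hedged sketch: the jar path and class are taken from the thread's earlier command, the conf keys are Spark's documented dynamic-allocation properties):

```python
# Sketch: assembling the suggested --conf flag into a spark-submit
# invocation (resource sizes and paths are illustrative, not prescriptive).
conf = {
    "spark.dynamicAllocation.enabled": "true",
    "spark.dynamicAllocation.initialExecutors": "50",  # Saisai's suggestion
}

cmd = ["./bin/spark-submit",
       "--class", "org.apache.spark.examples.SparkPi",
       "--master", "yarn-cluster"]
for key, value in conf.items():
    cmd += ["--conf", f"{key}={value}"]
cmd += ["lib/spark-examples.jar", "200"]

print(" ".join(cmd))
```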
@Sab Thank you for your reply, but the cluster has 6 nodes with 300 cores
in total, and the Spark application did not request resources from YARN.
@SaiSai I have run it successfully with "
spark.dynamicAllocation.initialExecutors" set to 50, but in
The document is right. Because of a bug introduced in
https://issues.apache.org/jira/browse/SPARK-9092, this
configuration fails to work.
It is fixed in https://issues.apache.org/jira/browse/SPARK-10790; you could
upgrade to a newer version of Spark.
On Tue, Nov 24, 2015 at 5:12 PM, 谢廷稳
Thank you very much; after changing to a newer version, it did work well!
2015-11-24 17:15 GMT+08:00 Saisai Shao :
> The document is right. Because of a bug introduced in
> https://issues.apache.org/jira/browse/SPARK-9092, this
> configuration fails to work.
>
> It
Hi Tingwen,
Would you mind sharing your changes in
ExecutorAllocationManager#addExecutors()?
From my understanding and testing, dynamic allocation can work when you
set the min and max number of executors to the same number.
Please check your Spark and YARN logs to make sure the executors
I don't think it is a bug; maybe something is wrong with your Spark / YARN
configurations.
On Tue, Nov 24, 2015 at 12:13 PM, 谢廷稳 wrote:
> OK, the YARN cluster was used only by me; it has 6 nodes which can run over
> 100 executors, and the YARN RM logs showed that the Spark
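The min-equals-max setup Saisai describes above can be sketched as a conf fragment (key names are Spark's documented dynamic-allocation properties; the value 50 is illustrative, matching the executor count discussed in this thread):

```python
# Sketch: pinning the allocation size by setting min == max executors.
dyn_alloc_conf = {
    "spark.dynamicAllocation.enabled": "true",
    "spark.shuffle.service.enabled": "true",  # required by dynamic allocation
    "spark.dynamicAllocation.minExecutors": "50",
    "spark.dynamicAllocation.maxExecutors": "50",
}

# With min == max, the target executor count can never move off 50.
assert (dyn_alloc_conf["spark.dynamicAllocation.minExecutors"]
        == dyn_alloc_conf["spark.dynamicAllocation.maxExecutors"])
print("pinned executor count:",
      dyn_alloc_conf["spark.dynamicAllocation.maxExecutors"])
```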
Hi Saisai,
Would you mind giving me some tips about this problem? After checking the
YARN RM logs, I think the Spark application didn't request resources from
it, so I guess this problem is not on YARN's side. The Spark conf of my
cluster is listed below:
Can you show your parameter values in your env?
yarn.nodemanager.resource.cpu-vcores
yarn.nodemanager.resource.memory-mb
cherrywayb...@gmail.com
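The two NodeManager settings asked about above live in yarn-site.xml; a sketch of pulling them out programmatically (the XML below is a made-up example, not the poster's actual configuration):

```python
# Sketch: extracting the NodeManager resource settings from a yarn-site.xml.
# SAMPLE_YARN_SITE is a hypothetical file; values are illustrative only.
import xml.etree.ElementTree as ET

SAMPLE_YARN_SITE = """
<configuration>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>48</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>65536</value>
  </property>
</configuration>
"""

def nm_resources(xml_text):
    """Return the NodeManager cpu/memory properties found in the XML."""
    root = ET.fromstring(xml_text)
    wanted = {"yarn.nodemanager.resource.cpu-vcores",
              "yarn.nodemanager.resource.memory-mb"}
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")
            if p.findtext("name") in wanted}

print(nm_resources(SAMPLE_YARN_SITE))
```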
From: 谢廷稳
Date: 2015-11-24 12:13
To: Saisai Shao
CC: spark users
Subject: Re: A Problem About Running Spark 1.5 on YARN with Dynamic
Hi Saisai,
I'm sorry I did not describe it clearly. The YARN debug log said I have 50
executors, but the ResourceManager showed that I only have 1 container for
the AppMaster.
I have checked the YARN RM logs; after the AppMaster changed state from
ACCEPTED to RUNNING, it did not have any log about this job
Hi SaiSai,
I have changed "if (numExecutorsTarget >= maxNumExecutors)" to "if
(numExecutorsTarget > maxNumExecutors)" on the first line of
ExecutorAllocationManager#addExecutors() and it ran well.
In my opinion, when I set minExecutors equal to maxExecutors, when the
first time to add
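To illustrate why that one-character change matters, here is a toy model (not Spark's actual source) of the early-return guard discussed above: with minExecutors == maxExecutors the initial target already equals the maximum, so the `>=` comparison bails out before any request is sent.

```python
# Toy model (NOT Spark source) of the guard at the top of
# ExecutorAllocationManager#addExecutors() discussed in this thread.
def add_executors(num_executors_target, max_num_executors, strict=True):
    """Return the new target, or the old one if the guard bails out early.

    strict=True models the original `>=` check; strict=False models
    Tingwen's `>` change.
    """
    bail = (num_executors_target >= max_num_executors if strict
            else num_executors_target > max_num_executors)
    if bail:
        return num_executors_target          # no request sent to YARN
    return min(num_executors_target + 1, max_num_executors)

# With min == max == 50 the initial target is already 50, so the `>=`
# guard returns immediately; the `>` variant lets the add path run (the
# target is still capped at 50 either way).
print(add_executors(50, 50, strict=True))
print(add_executors(50, 50, strict=False))
```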
I think this behavior is expected: since you already have 50 executors
launched, there is no need to acquire additional executors. Your change is
not solid; it is just hiding the log.
Again I think you should check the logs of Yarn and Spark to see if
executors are started correctly. Why resource is