Hi all,
I ran Spark 1.4 with Dynamic Allocation enabled. While it was running, I could
see each executor's information, such as ID, address, shuffle read/write, logs,
etc. But once an executor was removed, the web page no longer displayed that
executor; finally, the Spark app's information in the Spark HistoryServer
> fixed in https://issues.apache.org/jira/browse/SPARK-10790; you
> could upgrade to a newer version of Spark.
>
> On Tue, Nov 24, 2015 at 5:12 PM, 谢廷稳 wrote:
>
>> @Sab Thank you for your reply, but the cluster has 6 nodes which contain
>> 300 cores, and the Spark application did not requ
n why 49 executors could work.
>
> On Tue, Nov 24, 2015 at 4:42 PM, Sabarish Sasidharan <
> sabarish.sasidha...@manthan.com> wrote:
>
>> If YARN has only 50 cores, then it can support at most 49 executors plus 1
>> driver application master.
>>
>> Regards
>
/24 16:18:00 WARN cluster.YarnClusterScheduler: Initial job has not
> accepted any resources; check your cluster UI to ensure that workers are
> registered and have sufficient resources
>
>
2015-11-24 15:14 GMT+08:00 Saisai Shao :
> What about this configuration in YARN: "yarn.sched
11-24 14:30 GMT+08:00 cherrywayb...@gmail.com:
> can you show your parameter values in your env ?
> yarn.nodemanager.resource.cpu-vcores
> yarn.nodemanager.resource.memory-mb
>
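For reference, those two NodeManager capacity settings live in yarn-site.xml; a hypothetical fragment (the values are illustrative, not taken from this thread) might look like:

```xml
<!-- Illustrative values only; set these to match the actual node hardware. -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>50</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>65536</value>
</property>
```

With 6 nodes at 50 vcores each, the cluster would expose the 300 vcores mentioned earlier in the thread.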
> --
> cherrywayb...@gmail.com
>
>
> *From:* 谢廷稳
with your Spark / Yarn
> configurations.
>
> On Tue, Nov 24, 2015 at 12:13 PM, 谢廷稳 wrote:
>
>> OK, the YARN cluster is used only by myself; it has 6 nodes which can run
>> over 100 executors, and the YARN RM logs showed that the Spark application
>> did not request resources fro
id I'm OK with setting min and max executors to the same number.
>
> On Tue, Nov 24, 2015 at 11:54 AM, 谢廷稳 wrote:
>
>> Hi Saisai,
>> I'm sorry I did not describe it clearly: the YARN debug log said I have 50
>> executors, but the ResourceManager showed that I only have 1 container
> On Tue, Nov 24, 2015 at 10:48 AM, 谢廷稳 wrote:
>
>> Hi SaiSai,
>> I have changed "if (numExecutorsTarget >= maxNumExecutors)" to "if
>> (numExecutorsTarget > maxNumExecutors)" at the first line of
>> ExecutorAllocationManager#addExecutors
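As a rough sketch (not the actual Spark source) of why that `>=` guard matters when minExecutors equals maxExecutors, assuming a simplified model of addExecutors:

```scala
// Simplified, hypothetical model of the guard discussed above; the real
// ExecutorAllocationManager#addExecutors in Spark is more involved.
object GuardSketch {
  def executorsToRequest(numExecutorsTarget: Int, maxNumExecutors: Int): Int =
    if (numExecutorsTarget >= maxNumExecutors) {
      // When min == max, the target already equals the max, so this branch
      // is taken immediately and no additional executors are requested.
      0
    } else {
      // Otherwise ramp up exponentially, capped at the max.
      math.min(numExecutorsTarget * 2, maxNumExecutors) - numExecutorsTarget
    }

  def main(args: Array[String]): Unit = {
    println(executorsToRequest(50, 50)) // min == max: nothing requested
    println(executorsToRequest(25, 50)) // room to grow: requests 25 more
  }
}
```

With `>` instead of `>=`, the min == max case would fall through to the request path, which is the behavior change being tested above.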
re the executors are
> correctly started; the warning log means there are currently not enough
> resources to submit tasks.
>
> Thanks
> Saisai
>
>
> On Mon, Nov 23, 2015 at 8:41 PM, 谢廷稳 wrote:
>
>> Hi all,
>> I ran a SparkPi on YARN with Dynamic Allocation enabled and
Hi all,
I ran a SparkPi on YARN with Dynamic Allocation enabled and set
spark.dynamicAllocation.maxExecutors equal to
spark.dynamicAllocation.minExecutors, then I submitted an application using:
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master
yarn-cluster --driver-memory 4g --exec
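The command above is cut off; for reference, a hypothetical full submission that pins the dynamic-allocation bounds to the same value (the jar path, version, and numbers are illustrative, not the original poster's exact command) could look like:

```shell
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn-cluster \
  --driver-memory 4g \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=50 \
  --conf spark.dynamicAllocation.maxExecutors=50 \
  lib/spark-examples-1.4.0-hadoop2.6.0.jar 1000
```

Note that dynamic allocation on YARN also requires the external shuffle service to be enabled, hence the spark.shuffle.service.enabled line.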