think the YARN resources are sufficient. In my previous email I
>> said that I think the Spark application didn't request resources from YARN.
>>
>> Thanks
>>
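>> Since the thread's subject points at dynamic allocation, it is worth checking the usual prerequisites for executors to be requested on YARN at all. A sketch of a spark-defaults.conf fragment — on Spark 1.5 the external shuffle service must also be running on every NodeManager, and the min/max values below are illustrative assumptions, not taken from this thread:

```
spark.dynamicAllocation.enabled        true
spark.shuffle.service.enabled          true
spark.dynamicAllocation.minExecutors   1
spark.dynamicAllocation.maxExecutors   4
```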
>> 2015-11-24 14:30 GMT+08:00 cherrywayb...@gmail.com <
>> cherrywayb...@gmail.com>:
>> there will be log lines like:
>>
>> 15/10/14 17:35:37 INFO yarn.YarnAllocator: Will request 2 executor
>> containers, each with 1 cores and 1408 MB memory including 384 MB overhead
>> 15/10/14 17:35:37 INFO yarn.YarnAllocator: Container request (host: Any,
>> capability: <memory:1408, vCores:1>)
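The 1408 MB in the request above is the executor memory plus the YARN container overhead; in Spark 1.5 the default `spark.yarn.executor.memoryOverhead` is max(384 MB, 10% of executor memory). A minimal sketch of that arithmetic (the function name is mine, not Spark's):

```python
def container_memory_mb(executor_memory_mb,
                        overhead_min_mb=384, overhead_factor=0.10):
    """Memory YARN is asked for per executor container: the executor heap
    plus the default Spark 1.5 overhead, max(384 MB, 10% of the heap)."""
    overhead_mb = max(int(executor_memory_mb * overhead_factor), overhead_min_mb)
    return executor_memory_mb + overhead_mb

# The default 1 GB executor yields the 1408 MB seen in the log:
# 1024 + max(102, 384) = 1408
```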
>>> that workers are
>>> registered and have sufficient resources
>>>
>>> 15/11/24 16:18:00 WARN cluster.YarnClusterScheduler: Initial job has not
>>> accepted any resources; check your cluster UI to ensure that workers are
>>> registered and have sufficient resources
>>>> cluster UI to ensure that workers are
>>>> registered and have sufficient resources
>>>>
>>>> 15/11/24 16:17:30 WARN cluster.YarnClusterScheduler: Initial job has not
>>>> accepted any resources; check your cluster UI to ensure that workers are
>>>>>
>>>>> 15/11/24 16:16:45 WARN cluster.YarnClusterScheduler: Initial job has not
>>>>> accepted any resources; check your cluster UI to ensure that workers are
>>>>> registered and have sufficient resources
>>>>>
>>>>> 15/11/24
>>>>>> spark.SparkContext: Created broadcast 0 from
>>>>>> broadcast at DAGScheduler.scala:861
>>>>>>
>>>>>> 15/11/24 16:16:30 INFO scheduler.DAGScheduler: Submitting 200 missing
>>>>>> tasks from ResultStage 0 (MapPartitionsRDD
Can you show your parameter values in your env?
yarn.nodemanager.resource.cpu-vcores
yarn.nodemanager.resource.memory-mb
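The repeated "Initial job has not accepted any resources" warning above typically means the pending container request cannot be satisfied by any single NodeManager, which is why these two yarn-site.xml values matter. A hypothetical sanity check — the NodeManager numbers below are made-up examples, not values from this thread:

```python
def fits_on_nodemanager(request_mb, request_vcores, nm_memory_mb, nm_vcores):
    """True if one NodeManager, given its yarn.nodemanager.resource.memory-mb
    and yarn.nodemanager.resource.cpu-vcores settings, could ever host a
    container of the requested size; if not, the request stays pending forever."""
    return request_mb <= nm_memory_mb and request_vcores <= nm_vcores

# The 1408 MB / 1 vcore request from the log fits on an 8 GB / 8-vcore
# NodeManager, but can never be placed on a 1 GB one.
```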
cherrywayb...@gmail.com
From: 谢廷稳
Date: 2015-11-24 12:13
To: Saisai Shao
CC: spark users
Subject: Re: A Problem About Running Spark 1.5 on YARN with Dynamic