Thank you for your help and advice. After I added the log4j conf to expose
more details, I found that Spark had sent the remove requests, but some
containers never received the SIGTERM signal.
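For reference, the debug setting in question is a single line in the log4j
configuration (typically conf/log4j.properties in the Spark distribution;
path assumed here, adjust to your deployment):

```properties
# Surface ExecutorAllocationManager decisions (add/remove requests, idle timeouts)
log4j.logger.org.apache.spark.ExecutorAllocationManager=DEBUG
```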

Thanks.

2016-03-10 10:52 GMT+08:00 Saisai Shao <sai.sai.s...@gmail.com>:

> Still, I think this information is not enough to explain the cause.
>
> 1. Does your YARN cluster have enough resources to start all 10 executors?
> 2. Would you please try the latest version (1.6.0) or the master branch to
> see if this is a bug that has already been fixed?
> 3. You could add
> "log4j.logger.org.apache.spark.ExecutorAllocationManager=DEBUG" to the
> log4j conf to expose more details; maybe you can dig out some clues.
>
>
> Thanks
> Saisai
>
> On Thu, Mar 10, 2016 at 10:18 AM, Jy Chen <chen.wah...@gmail.com> wrote:
>
>> Sorry, the last configuration is also --conf
>> spark.dynamicAllocation.cachedExecutorIdleTimeout=60s; the "--conf" was
>> lost when I copied it into the mail.
>>
>> ---------- Forwarded message ----------
>> From: Jy Chen <chen.wah...@gmail.com>
>> Date: 2016-03-10 10:09 GMT+08:00
>> Subject: Re: Dynamic allocation doesn't work on YARN
>> To: Saisai Shao <sai.sai.s...@gmail.com>, user@spark.apache.org
>>
>>
>> Hi,
>> My Spark version is 1.5.1 with Hadoop 2.5.0-cdh5.2.0. These are my
>> configurations of dynamic allocation:
>> --master yarn-client --conf spark.dynamicAllocation.enabled=true --conf
>> spark.shuffle.service.enabled=true --conf
>> spark.dynamicAllocation.minExecutors=0 --conf
>> spark.dynamicAllocation.initialExecutors=10
>>  --conf spark.dynamicAllocation.executorIdleTimeout=60s
>> spark.dynamicAllocation.cachedExecutorIdleTimeout=60s
>>
>> At first it removes 2 executors, and then no more executors are
>> removed.
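>> With the lost "--conf" restored, the full launch command would read as
>> follows (a sketch only; spark-shell stands in for the actual application):
>>
>> ```shell
>> spark-shell --master yarn-client \
>>   --conf spark.dynamicAllocation.enabled=true \
>>   --conf spark.shuffle.service.enabled=true \
>>   --conf spark.dynamicAllocation.minExecutors=0 \
>>   --conf spark.dynamicAllocation.initialExecutors=10 \
>>   --conf spark.dynamicAllocation.executorIdleTimeout=60s \
>>   --conf spark.dynamicAllocation.cachedExecutorIdleTimeout=60s
>> ```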
>>
>> Thanks
>>
>> 2016-03-09 17:24 GMT+08:00 Saisai Shao <sai.sai.s...@gmail.com>:
>>
>>> Would you please send out your dynamic allocation configuration so we
>>> can understand the situation better?
>>>
>>> On Wed, Mar 9, 2016 at 4:29 PM, Jy Chen <chen.wah...@gmail.com> wrote:
>>>
>>>> Hello everyone:
>>>>
>>>> I'm trying out dynamic allocation in Spark on YARN. I have followed the
>>>> configuration steps and started the shuffle service.
>>>>
>>>> It can now request executors when the workload is heavy, but it cannot
>>>> remove executors. If I open the Spark shell and don't run any command,
>>>> no executor is removed after the
>>>> spark.dynamicAllocation.executorIdleTimeout interval.
>>>> Am I missing something?
>>>>
>>>> Thanks.
>>>>
>>>>
>>>
>>
>>
>
