I ran the yarn logs command and got the following:
A set of YarnAllocator warnings: 'Expected to find requests, but found none.'
Then an error:
Akka ErrorMonitor: AssociationError ...
But then I still get a final app status of Succeeded, exit code 0.
What do these errors mean?

On Wed, 16 Dec 2015 at 08:27 Eran Witkon <eranwit...@gmail.com> wrote:

> But what if I don't have more memory?
> On Wed, 16 Dec 2015 at 08:13 Zhan Zhang <zzh...@hortonworks.com> wrote:
>
>> There are two cases here. If the container is killed by YARN, you can
>> increase the JVM memory overhead. Otherwise, if there is no memory leak,
>> you have to increase the executor memory instead.
>>
>> Thanks.
>>
>> Zhan Zhang
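
A minimal sketch of the two settings Zhan describes, assuming Spark 1.x on
YARN (the overhead property was renamed to spark.executor.memoryOverhead in
Spark 2.3+); the values here are placeholders, not recommendations:

import org.apache.spark.{SparkConf, SparkContext}

// Case 1: YARN kills the container because heap + off-heap use exceeds the
// container limit -> raise the overhead (a plain MB value in Spark 1.x).
// Case 2: the executor JVM itself throws OutOfMemoryError -> raise the heap.
val conf = new SparkConf()
  .setAppName("memory-tuning-sketch")
  .set("spark.executor.memory", "5g")                // executor JVM heap
  .set("spark.yarn.executor.memoryOverhead", "1024") // extra MB reserved off-heap
val sc = new SparkContext(conf)

The same pair can be passed to spark-submit as --executor-memory 5g and
--conf spark.yarn.executor.memoryOverhead=1024.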
>>
>> On Dec 15, 2015, at 9:58 PM, Eran Witkon <eranwit...@gmail.com> wrote:
>>
>> If the problem is containers trying to use more memory than they are
>> allowed, how do I limit them? I already have executor-memory set to 5G.
>> Eran
>> On Tue, 15 Dec 2015 at 23:10 Zhan Zhang <zzh...@hortonworks.com> wrote:
>>
>>> You should be able to get the logs from YARN with “yarn logs
>>> -applicationId xxx”, where you can possibly find the cause.
>>>
>>> Thanks.
>>>
>>> Zhan Zhang
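
For example, with a made-up application id: yarn logs -applicationId
application_1450252424123_0001 > app.log, then search app.log for the first
ERROR, or for a YARN message like “is running beyond physical memory limits”,
which indicates the container-kill case. The real id appears in the
spark-submit output and in the ResourceManager UI.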
>>>
>>> On Dec 15, 2015, at 11:50 AM, Eran Witkon <eranwit...@gmail.com> wrote:
>>>
>>> > When running
>>> > val data = sc.wholeTextFiles("someDir/*"); data.count()
>>> >
>>> > I get numerous warnings from YARN until I get an Akka association
>>> exception. Can someone explain what happens when Spark loads this RDD and
>>> can't fit it all in memory?
>>> > Based on the exception it looks like the server is disconnecting from
>>> yarn and failing... Any idea why? The code is simple but it still fails...
>>> > Eran
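
A short sketch of why this particular pattern is memory-hungry, reusing the
spark-shell's sc as above; the sc.textFile alternative is illustrative, not
something proposed in the thread:

// sc.wholeTextFiles yields one (path, wholeFileContents) record per file,
// so each file must fit entirely in a single task's memory; a few large
// files can OOM an executor even though the job only counts records.
val data = sc.wholeTextFiles("someDir/*")
println(data.count())

// If whole-file strings are not actually needed, sc.textFile reads the same
// files line by line and is far gentler on executor memory:
val lines = sc.textFile("someDir/*")
println(lines.count())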
>>>
>>>
>>
