I set yarn.nodemanager.vmem-check-enabled to false in yarn-site.xml,
and it works for both yarn-client and spark-shell.
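For reference, a minimal yarn-site.xml fragment for this change (the property name is the standard one in Hadoop 2.x; the surrounding file layout is whatever your cluster already uses):

```xml
<!-- yarn-site.xml: disable the NodeManager virtual-memory check
     that was killing the Spark container -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```

A less drastic alternative is to keep the check enabled and raise yarn.nodemanager.vmem-pmem-ratio (default 2.1) instead of disabling it outright.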
On Fri, Oct 21, 2016 at 10:59 AM, Li Li <fancye...@gmail.com> wrote:
> I found a warning in the nodemanager log. Is the virtual memory exceeded? How
> should I config y
>> 16/10/20 18:12:14 ERROR cluster.YarnClientSchedulerBackend: Yarn
>> application has already exited with state FINISHED!
>>
>> From this, I think Spark is having difficulty communicating with YARN. You
>> should check your Spark log.
>>
>> On Fri, Oct 21, 2016 at 8
>> 16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:
>
> This container may have been killed by the YARN NodeManager or another
> process; you'd better check the YARN log to dig out more details.
>
> Thanks
> Saisai
>
> On Thu, Oct 20, 2016 at 6:51 PM, Li Li <fancye...@gmail.com> wrote:
>>
>> I am setting up a small yarn/spark cluster. hadoop/yarn version is
>> 2.7.3 and I can run wordcount map-reduce correctly in yarn.
>> 16/10/20 18:12:04 INFO yarn.ApplicationMaster: Final app status:
I am setting up a small yarn/spark cluster. hadoop/yarn version is
2.7.3 and I can run the wordcount map-reduce example correctly in yarn.
I am using spark-2.0.1-bin-hadoop2.7, submitted with the command:
~/spark-2.0.1-bin-hadoop2.7$ ./bin/spark-submit --class
org.apache.spark.examples.SparkPi --master yarn-client
Could anyone help?
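For anyone reproducing this, a complete SparkPi submit for that layout might look like the sketch below. The examples jar path is the standard one inside the spark-2.0.1-bin-hadoop2.7 distribution (verify it on your install), and note that in Spark 2.x `--master yarn-client` is deprecated in favor of `--master yarn --deploy-mode client`:

```shell
# Sketch of a full submit command, run from the spark-2.0.1-bin-hadoop2.7 dir.
# "--master yarn-client" still works in 2.0.1 but is deprecated; the
# equivalent modern form is "--master yarn --deploy-mode client".
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode client \
  examples/jars/spark-examples_2.11-2.0.1.jar \
  100
```

The trailing `100` is the number of partitions SparkPi uses for its estimate; any positive integer works.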
On Wed, Dec 23, 2015 at 1:40 PM, Li Li <fancye...@gmail.com> wrote:
I ran my LDA example on a yarn 2.6.2 cluster with spark 1.5.2.
It throws an exception at the line: Matrix topics = ldaModel.topicsMatrix();
But in the yarn job history UI it's successful. What's wrong with it?
I submit the job with:
./bin/spark-submit --class Myclass \
--master yarn-client \