Re: How to monitor YARN application memory per container?

2017-06-13 Thread Shmuel Blitz
Hi, Thanks for your response. The /metrics API returns a blank page on our RM. The /jmx API has some metrics, but these are the same metrics we are already loading into Datadog. It's not good enough, because it doesn't break down the memory use by container. I need the by-container breakdown b
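[Editor's note] One option for a per-container breakdown is polling each NodeManager's REST endpoint (`/ws/v1/node/containers` on port 8042 by default) and recording the figures per container ID over time. The sketch below only parses a response body; the field names (`totalMemoryNeededMB`, etc.) and the sample payload are assumptions based on the NM REST response shape, and the value reported is allocated memory, not live usage — verify against your Hadoop version's docs.

```python
import json

def container_memory_mb(node_containers_json):
    """Extract per-container allocated memory (MB) from a NodeManager
    /ws/v1/node/containers response body (field names assumed)."""
    data = json.loads(node_containers_json)
    containers = data.get("containers", {}).get("container", []) or []
    return {c["id"]: c["totalMemoryNeededMB"] for c in containers}

# Hypothetical sample response, for illustration only.
sample = json.dumps({
    "containers": {"container": [
        {"id": "container_1497340000000_0001_01_000001",
         "state": "RUNNING", "totalMemoryNeededMB": 2048},
        {"id": "container_1497340000000_0001_01_000002",
         "state": "RUNNING", "totalMemoryNeededMB": 1024},
    ]}
})

print(container_memory_mb(sample))
```

Polling this in a loop per NodeManager and tagging the metric with the container ID would give the over-time, per-container graph the thread is asking for; for actual (not allocated) usage you would likely need NM container metrics or cgroup accounting instead.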

Mapreduce application failed on Distributed scheduler on YARN.

2017-06-13 Thread Jasson Chenwei
Hi all, I have set up a distributed scheduler using a new feature in Hadoop 3.0. My Hadoop version is hadoop-3.0.0-alpha3. I have enabled the opportunistic container and distributed scheduler in yarn-site.xml following the guide. But the wordcount application master fails to launch as follows: *2017-0
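[Editor's note] For reference, a minimal yarn-site.xml sketch of the properties the opportunistic-container / distributed-scheduling guide describes. Property names follow the Hadoop 3.0 documentation but should be verified against the exact alpha release in use; the queue-length value is an arbitrary example.

```xml
<!-- ResourceManager: allow opportunistic container allocation -->
<property>
  <name>yarn.resourcemanager.opportunistic-container-allocation.enabled</name>
  <value>true</value>
</property>

<!-- NodeManager: how many opportunistic containers may queue per node -->
<property>
  <name>yarn.nodemanager.opportunistic-containers-max-queue-length</name>
  <value>10</value>
</property>

<!-- Distributed scheduling additionally requires the AMRMProxy on each NM -->
<property>
  <name>yarn.nodemanager.amrmproxy.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.distributed-scheduling.enabled</name>
  <value>true</value>
</property>
```

With distributed scheduling, the AM talks to the local AMRMProxy rather than the RM directly, so a missing or misconfigured AMRMProxy is a common reason for the AM failing at launch.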

Re: How to monitor YARN application memory per container?

2017-06-13 Thread Sidharth Kumar
Hi, I guess you can get it from http://:/jmx or /metrics Regards Sidharth LinkedIn: www.linkedin.com/in/sidharthkumar2792 On 13-Jun-2017 6:26 PM, "Shmuel Blitz" wrote: > (This question has also been published on StackOverflow > ) > > I am looking for

How to monitor YARN application memory per container?

2017-06-13 Thread Shmuel Blitz
(This question has also been published on StackOverflow ) I am looking for a way to monitor memory usage of YARN containers over time. Specifically - given a YARN application-id, how can you get a graph, showing the memory usage of each of its container

Install spark on a hadoop cluster

2017-06-13 Thread Bhushan Pathak
Hello, I have a 3-node Hadoop cluster - one master & 2 slaves. I want to integrate Spark with this Hadoop setup so that Spark uses YARN for job scheduling & execution. Hadoop version : 2.7.3 Spark version : 2.1.0 I have read various documentation & blog posts & my understanding so far is that -
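[Editor's note] A sketch of the usual setup for this: Spark does not need to be installed on every node — it only needs the Hadoop client configuration so it can locate the ResourceManager, after which `--master yarn` submits work through YARN. The config path and jar location below are assumptions for a typical Spark 2.1.0 layout; adjust to the actual install.

```shell
# Point Spark at the Hadoop client configs so it can find the RM/HDFS
# (path assumed; use wherever core-site.xml and yarn-site.xml live)
export HADOOP_CONF_DIR=/etc/hadoop/conf

# Smoke-test by submitting the bundled SparkPi example to YARN
./bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples_2.11-2.1.0.jar 100
```

If this runs, the application should appear in the YARN ResourceManager UI alongside MapReduce jobs, confirming YARN is doing the scheduling.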

Re: Hadoop Application Report from WebUI

2017-06-13 Thread Hilmi Egemen Ciritoğlu
Hi guys, Thanks a lot for your reply. I have doubts about the second number, because it's always a fixed 134217728 (128 MB). According to your answer, when I inspect all mappers I can see unprocessed blocks. But in the meantime I know all the data has been processed, from the result I got. Are you really sure
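[Editor's note] The fixed 134217728 is simply 128 MiB in bytes - HDFS's default block size, which is why it never varies per job. A quick sketch of the arithmetic, using the split-size rule FileInputFormat applies (max(minSize, min(maxSize, blockSize))); the helper names here are illustrative, not Hadoop API:

```python
# 128 * 1024 * 1024 = 134217728 bytes (128 MiB), the default HDFS block size.
DEFAULT_BLOCK_SIZE = 128 * 1024 * 1024

def split_size(block_size, min_size=1, max_size=2**63 - 1):
    """FileInputFormat's split-size rule: max(minSize, min(maxSize, blockSize))."""
    return max(min_size, min(max_size, block_size))

def num_splits(file_bytes, block_size=DEFAULT_BLOCK_SIZE):
    """Input splits (roughly one mapper each) for a single file,
    ignoring the small final-split slack factor."""
    return max(1, -(-file_bytes // split_size(block_size)))  # ceiling division

print(DEFAULT_BLOCK_SIZE)         # 134217728
print(num_splits(1_000_000_000))  # 8 splits for a ~1 GB file
```

So a per-mapper report showing 134217728 describes the split each mapper was assigned, not how much of it remains unprocessed - consistent with all data having been processed in the final result.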