RE: Hadoop source code modification.

2016-11-15 Thread Mallanagouda Patil
Hi, some time back I wrote an article on how to set up a development environment for Hadoop and how to debug it: http://www.codeproject.com/Articles/1067129/Debugging-Hadoop-HDFS-using-IntelliJ-IDEA-on-Linux
You can reach me for any issues. Thanks, Mallan
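A minimal sketch of the remote-debug setup an article like that typically walks through (assumed details, not quoted from the linked post; the daemon and port are arbitrary choices):

    # Pass standard JDWP options to the NameNode via hadoop-env.sh or the shell.
    # HADOOP_NAMENODE_OPTS is the stock per-daemon hook; port 5005 is arbitrary.
    export HADOOP_NAMENODE_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 $HADOOP_NAMENODE_OPTS"
    hdfs namenode
    # Then add a "Remote" debug run configuration in IntelliJ IDEA pointing at
    # localhost:5005 and set breakpoints in the HDFS sources.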

RE: Hadoop source code modification.

2016-11-15 Thread Brahma Reddy Battula
The following links might be useful for you:
https://wiki.apache.org/hadoop/EclipseEnvironment
http://blog.cloudera.com/blog/2013/05/how-to-configure-eclipse-for-hadoop-contributions/
https://www.quora.com/What-are-the-best-ways-to-learn-about-Hadoop-source
Regards, Brahma Reddy Battula

Re: HDFS - Corrupt replicas preventing decommissioning?

2016-11-15 Thread Hariharan
Thanks Brahma. That certainly cleared up a lot of doubts: the file did indeed show up in *fsck -openforwrite*, and deleting it moved the node to the "Decommissioned" state. So the recommendation here is to wait for all files having blocks on the node to be closed before adding it to the excludes file?
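For reference, a sketch of the commands involved in the workflow discussed above (standard HDFS CLI; the path and ordering are illustrative, not quoted from the thread):

    # List files still open for write, whose unfinalized blocks can stall
    # decommissioning:
    hdfs fsck / -openforwrite -files -blocks -locations
    # After adding the host to the excludes file (dfs.hosts.exclude), ask the
    # NameNode to re-read it and begin decommissioning:
    hdfs dfsadmin -refreshNodes
    # Watch the node progress: Normal -> Decommission In Progress -> Decommissioned
    hdfs dfsadmin -report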

RE: Hadoop source code modification.

2016-11-15 Thread Brahma Reddy Battula
(Keeping the user mailing list in the loop.) You can compile just the module you modified. Please refer to the "Where to run Maven from?" section of https://github.com/apache/hadoop/blob/trunk/BUILDING.txt
Regards, Brahma Reddy Battula
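As a hedged example of what that section describes (the module path is illustrative; BUILDING.txt is the authoritative reference):

    # After one full top-level build, rebuild only the module you changed,
    # e.g. hadoop-hdfs (-pl selects the module, -am also builds its dependencies):
    mvn install -DskipTests -pl hadoop-hdfs-project/hadoop-hdfs -am
    # Or run Maven from inside the module directory itself:
    cd hadoop-hdfs-project/hadoop-hdfs && mvn install -DskipTests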

RE: HDFS - Corrupt replicas preventing decommissioning?

2016-11-15 Thread Brahma Reddy Battula
Please check my inline comments on your queries. I hope I have answered all your questions. Regards, Brahma Reddy Battula

Re: Get information of containers - running/killed/completed

2016-11-15 Thread Rohith Sharma K S
Hi Ajay, for running containers you can get a container report from the ResourceManager. For completed/killed containers, you need to start the ApplicationHistoryServer daemon and use the same API, i.e. yarnClient.getContainerReport(), to get the report. Basically, this API contacts the RM first and, if the container is no longer known to the RM, falls back to the ApplicationHistoryServer.
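A minimal, self-contained sketch of that API call (assumed usage pieced together from the public YarnClient interface in Hadoop 2.6+, not Rohith's code; the container ID string is made up):

    // ContainerReportExample.java
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.api.records.ContainerId;
    import org.apache.hadoop.yarn.api.records.ContainerReport;
    import org.apache.hadoop.yarn.client.api.YarnClient;

    public class ContainerReportExample {
      public static void main(String[] args) throws Exception {
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new Configuration());
        yarnClient.start();
        try {
          // Illustrative ID; format is container_<clusterTs>_<appId>_<attempt>_<id>.
          ContainerId id = ContainerId.fromString(
              "container_1479200000000_0001_01_000001");
          // Served by the RM for running containers; for completed/killed ones
          // the ApplicationHistoryServer must be running, as noted above.
          ContainerReport report = yarnClient.getContainerReport(id);
          System.out.println(id + " state: " + report.getContainerState());
        } finally {
          yarnClient.stop();
        }
      }
    }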