Sorry, I forgot to mention: on each node you will find a script which should
clean the node for you,
/usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py. It will generally
clean up your cluster.

I’ve also noticed
/usr/lib/python2.6/site-packages/ambari_agent/DataCleaner.py
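
For what it's worth, when I have used HostCleanup.py in the past I invoked it
roughly as below. I am quoting the --silent and --skip=users flags from memory,
so please check the script's --help output on your own nodes first:

$ python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --help
$ python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --silent --skip=users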

Kind regards,
Olivier

From: Kaliyug Antagonist
Reply-To: "[email protected]"
Date: Thursday, 6 August 2015 10:22
To: "[email protected]"
Subject: Re: Clean un-installation via Ambari

Hi Olivier,

Thanks for the reply.

I had two concerns:

  1.  As I mentioned, I want to uninstall the entire cluster, which means the 9
nodes with the DataNode directories, NameNode configs, previously loaded
data (small in size), missing blocks (!) etc. should be cleaned, i.e. I get back
the 9 machines, which can then be used for a fresh installation of a cluster.
  2.  The reset will, I guess, only clear the metadata, but HDFS and the other
components will remain unchanged. I am not sure whether this will solve the
problems I am facing with my existing cluster.

Regards!

On Thu, Aug 6, 2015 at 10:12 AM, Olivier Renault 
<[email protected]<mailto:[email protected]>> wrote:
Log on to your Ambari server, bring it down and reset it:

$ ambari-server stop
$ ambari-server reset

If you are using PostgreSQL, as installed and configured by ambari-server, you
should be able to restart it straight away. If you are using MySQL or Oracle,
you will need to drop and re-create the database manually.
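
For MySQL, something along these lines should do it. The database name below is
the default, so adjust it to whatever you chose during setup, and double-check
the DDL script path under /var/lib/ambari-server/resources/ on your server:

$ mysql -u root -p
mysql> DROP DATABASE ambari;
mysql> CREATE DATABASE ambari;
mysql> USE ambari;
mysql> SOURCE /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql;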

$ ambari-server setup
….

Good luck,
Olivier

From: Kaliyug Antagonist
Reply-To: "[email protected]"
Date: Thursday, 6 August 2015 10:07
To: "[email protected]"
Subject: Clean un-installation via Ambari

I had installed HDP-2.2.4.2-2 using Ambari Version 2.0.0.

There have been several issues in the cluster due to misconfiguration of the
DataNode directories and so on. Now I get several alerts, and any MR job that I
execute fails with an error like this:


15/06/01 13:53:44 INFO mapreduce.Job: Job job_1431689151537_0003 running in 
uber mode : false
15/06/01 13:53:44 INFO mapreduce.Job: map 0% reduce 0%
15/06/01 13:53:47 INFO mapreduce.Job: Task Id : 
attempt_1431689151537_0003_m_000000_1000, Status : FAILED
java.io.FileNotFoundException: File /opt/dev/sdb/hadoop/yarn/local/filecache 
does not exist

15/06/01 13:53:51 INFO mapreduce.Job: Task Id : 
attempt_1431689151537_0003_m_000000_1001, Status : FAILED
java.io.FileNotFoundException: File /opt/dev/sdd/hadoop/yarn/local/filecache 
does not exist

15/06/01 13:53:55 INFO mapreduce.Job: Task Id : 
attempt_1431689151537_0003_m_000000_1002, Status : FAILED
java.io.FileNotFoundException: File /opt/dev/sdh/hadoop/yarn/local/filecache 
does not exist


I wish to cleanly uninstall the cluster and reinstall it; it is OK even if
Ambari itself needs to be uninstalled and reinstalled.

How can I do it?
