Thanks Yuqi,


I will deploy the Docker containers and report back.


The repository URL is pointing to my local repository, which I reposynced from 
the Bigtop repo. I also managed to install a single-node cluster with no 
problem, but while adding a new datanode it throws a warning and, most of the 
time, does not finish the installation, with no trackable logs.



I am also looking for hints on how to debug the process of adding a datanode.
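For what it's worth, one hedged way to get more trackable logs, assuming a default agent layout with the stock `loglevel=INFO` line in /etc/ambari-agent/conf/ambari-agent.ini, is to raise the agent's log level before re-running the add-host wizard. The demo below edits a local stand-in copy so the sed can be tried safely first:

```shell
# Sketch, assuming the stock "[agent] loglevel=INFO" entry in
# /etc/ambari-agent/conf/ambari-agent.ini; this demo edits a local
# stand-in copy so the command can be tried safely first.
CONF="${CONF:-./ambari-agent.ini}"
[ -f "$CONF" ] || printf '[agent]\nloglevel=INFO\n' > "$CONF"  # demo stand-in
sed -i 's/^loglevel=.*/loglevel=DEBUG/' "$CONF"
grep '^loglevel=' "$CONF"
# On the real host: point CONF at /etc/ambari-agent/conf/ambari-agent.ini,
# run "ambari-agent restart", re-run the add-datanode step, then watch
# /var/log/ambari-agent/ambari-agent.log for the detailed trace.
```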








Sent using https://www.zoho.com/mail/








---- On Mon, 29 May 2023 06:02:45 +0330 Yuqi Gu <guy...@apache.org> wrote ---



We have implemented the Bigtop stack provisioner with Ambari 2.7.5 to set up a 
cluster with some simple scripts. You can find the scripts here:

https://github.com/apache/bigtop/blob/master/provisioner/bigtop-stack-provisioner/docker/centos7/build-containers.sh



Would you like to take a look and confirm that your development environment has 
all the necessary tools and dependency packages? 

Also, ensure that the repository URL in Ambari web UI is set to the 3.2.0 
release (http://repos.bigtop.apache.org/releases/3.2.0/centos/7/x86_64).
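A quick way to sanity-check that URL, or a local reposync mirror of it, is to ask for its yum metadata before pointing the web UI at it. This is only a sketch: BASE is a parameter you should set to your own mirror, and repodata/repomd.xml is the standard yum repository layout:

```shell
# Sketch: verify a Bigtop yum repo URL serves metadata before pointing
# the Ambari web UI at it. BASE is a parameter (swap in a local mirror);
# repodata/repomd.xml is the standard yum repository layout.
BASE="${BASE:-http://repos.bigtop.apache.org/releases/3.2.0/centos/7/x86_64}"
URL="$BASE/repodata/repomd.xml"
echo "checking $URL"
if curl -fsI "$URL" >/dev/null 2>&1; then
  echo "repo metadata reachable"
else
  echo "repo metadata NOT reachable - check the URL or mirror"
fi
```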



BRs,

Yuqi






onmstester onmstester via user <mailto:user@bigtop.apache.org> wrote on Sat, 
27 May 2023 at 19:35:









Hi, 



I'm using Apache Ambari 2.7.5 to set up and maintain a Hadoop cluster on 
CentOS 7.9. I already managed to add a namenode and a datanode on the local 
host (the ambari-master host), but adding another datanode fails with a 
warning, and in /var/lib/ambari-agent/data/output-466.txt the last lines 
are:

2023-05-14 08:20:01,741 - FS Type: HDFS

2023-05-14 08:20:01,741 - XmlConfig['core-site.xml'] {'group': 'hadoop', 
'conf_dir': '/etc/hadoop/conf', 'xml_include_file': None, 
'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 
'hdfs', 'only_if': 'ls /etc/hadoop/conf', 'configurations': ...}

2023-05-14 08:20:01,755 - Generating config: /etc/hadoop/conf/core-site.xml

2023-05-14 08:20:01,755 - File['/etc/hadoop/conf/core-site.xml'] {'owner': 
'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 
'encoding': 'UTF-8'}

2023-05-14 08:20:01,769 - Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] 
{'create_parents': True, 'mode': 0755, 'cd_access': 'a'}

2023-05-14 08:20:01,769 - No logsearch configuration exists at 
/var/lib/ambari-agent/cache/stacks/BGTP/1.0/services/HDFS/package/templates/input.config-hdfs.json.j2

2023-05-14 08:20:01,769 - Could not load 'version' from 
/var/lib/ambari-agent/data/structured-out-466.json



Since all the necessary packages were already installed by the ambari agent 
and all configs had been generated, I tried to start the datanode manually 
with "hdfs --daemon start datanode" after creating the hdfs directory in 
/home and chowning it to the hdfs user, and it successfully started and 
registered with the namenode.



How can I find out why Ambari does not finish the task of adding the 
datanode? How can I see which steps Ambari executes to install the datanode, 
and at which step it gets stuck?
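In case it helps, a hedged sketch of where to look: for each task id N, the agent keeps command-N.json (the instructions the server sent), output-N.txt, and errors-N.txt under /var/lib/ambari-agent/data, so reading the trio for the failing task shows what it was asked to do and where it stopped. DATA and TASK are parameters; 466 matches the output-466.txt quoted above:

```shell
# Sketch: inspect the per-task files the Ambari agent leaves behind.
# DATA and TASK are parameters; 466 matches the output-466.txt above.
DATA="${DATA:-/var/lib/ambari-agent/data}"
TASK="${TASK:-466}"
for f in "command-$TASK.json" "output-$TASK.txt" "errors-$TASK.txt"; do
  echo "== $DATA/$f =="
  if [ -f "$DATA/$f" ]; then
    tail -n 20 "$DATA/$f"       # last steps / errors of this task
  else
    echo "(missing on this host)"
  fi
done
```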





Thanks in advance
