masatana commented on PR #1322:
URL: https://github.com/apache/bigtop/pull/1322#issuecomment-2579516737

   DEB test results
   
   <details>
   Building the DEB packages and local apt repository on Ubuntu 24.04
   
   ```
   $ ./gradlew hadoop-clean bigtop-utils-pkg bigtop-jsvc-pkg bigtop-groovy-pkg hadoop-pkg repo --stacktrace
   ```
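   
   To double-check what the build produced before provisioning, something like the following should work (a rough sketch; it assumes the default `output/` directory that the Bigtop build writes packages into):
   
   ```
   $ find output -name '*.deb' | sort
   ```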
   
   Smoke tests on Ubuntu 24.04
   ```
   $ cd provisioner/docker
   $ ./docker-hadoop.sh --enable-local-repo --disable-gpg-check --docker-compose-plugin -C config_ubuntu-24.04.yaml -F docker-compose-cgroupv2.yml --stack hdfs,yarn,mapreduce --smoke-tests hdfs -c 3
   
   (snip)
   
   
   > Task :bigtop-tests:smoke-tests:hdfs:test
   Finished generating test XML results (0.018 secs) into: 
/bigtop-home/bigtop-tests/smoke-tests/hdfs/build/test-results/test
   Generating HTML test report...
   Finished generating test html results (0.021 secs) into: 
/bigtop-home/bigtop-tests/smoke-tests/hdfs/build/reports/tests/test
   Now testing...
   :bigtop-tests:smoke-tests:hdfs:test (Thread[Execution worker for 
':',5,main]) completed. Took 8 mins 23.416 secs.
   
   BUILD SUCCESSFUL in 8m 58s
   29 actionable tasks: 11 executed, 18 up-to-date
   Stopped 1 worker daemon(s).
   + rm -rf buildSrc/build/test-results/binary
   + rm -rf /bigtop-home/.gradle
   ```
   
   
   Install additional packages (journalnode, zkfc, dfsrouter, secondarynamenode)
   
   ```
   $ ./docker-hadoop.sh -dcp --exec 1 /bin/bash
   $ apt install hadoop-hdfs-journalnode hadoop-hdfs-secondarynamenode hadoop-hdfs-zkfc hadoop-hdfs-dfsrouter -y
   ```
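   
   A quick way to confirm the daemon packages are actually installed (a small sketch, nothing Bigtop-specific):
   
   ```
   $ dpkg -l 'hadoop-hdfs-*' | grep '^ii'
   ```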
   
   Check that the services start via systemd (`systemctl start` & `systemctl status`)
   
   ```
   $ for service_name in namenode datanode journalnode secondarynamenode zkfc dfsrouter; do systemctl start hadoop-hdfs-$service_name; done
   Job for hadoop-hdfs-zkfc.service failed because the control process exited with error code.
   See "systemctl status hadoop-hdfs-zkfc.service" and "journalctl -xe" for details.
   ```
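   
   For a compact pass/fail view per unit, `systemctl is-active` can be used instead of the full status output (a sketch):
   
   ```
   $ for service_name in namenode datanode journalnode secondarynamenode zkfc dfsrouter; do printf '%-35s %s\n' "hadoop-hdfs-$service_name" "$(systemctl is-active hadoop-hdfs-$service_name)"; done
   ```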
   
   ```
   $ for service_name in namenode datanode journalnode secondarynamenode zkfc dfsrouter; do systemctl status hadoop-hdfs-$service_name; done
   
   ● hadoop-hdfs-namenode.service - Hadoop NameNode
        Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-namenode.service; 
static)
        Active: active (running) since Wed 2025-01-08 13:57:10 UTC; 22min ago
          Docs: https://hadoop.apache.org/
      Main PID: 5189 (java)
        CGroup: 
/docker/2e2d226cb5b756925802abb1cd7cd84d9002f6b521043c8111411efa5dea836b/system.slice/hadoop-hdfs-namenode.service
                └─5189 /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java 
-Dproc_namenode -Djava.net.preferIPv4Stack=true 
-Dhdfs.audit.logger=INFO,NullAppender -Dcom.sun.management.jmxremote 
-Dyarn.log.dir=/var/log/hadoop-hdfs 
-Dyarn.log.file=hadoop-hdfs-namenode-2e2d226cb5b7.log 
-Dyarn.home.dir=/usr/lib/hadoop-yarn -Dyarn.root.logger=INFO,console 
-Djava.library.path=//usr/lib/hadoop/lib/native 
-Dhadoop.log.dir=/var/log/hadoop-hdfs 
-Dhadoop.log.file=hadoop-hdfs-namenode-2e2d226cb5b7.log 
-Dhadoop.home.dir=//usr/lib/hadoop -Dhadoop.id.str=hdfs 
-Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml 
-Dhadoop.security.logger=INFO,NullAppender 
org.apache.hadoop.hdfs.server.namenode.NameNode
   
   Jan 08 13:57:08 2e2d226cb5b7 systemd[1]: Starting 
hadoop-hdfs-namenode.service - Hadoop NameNode...
   Jan 08 13:57:10 2e2d226cb5b7 systemd[1]: Started 
hadoop-hdfs-namenode.service - Hadoop NameNode.
   ● hadoop-hdfs-datanode.service - Hadoop DataNode
        Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-datanode.service; 
static)
        Active: active (running) since Wed 2025-01-08 13:58:01 UTC; 21min ago
          Docs: https://hadoop.apache.org/
      Main PID: 6536 (java)
        CGroup: 
/docker/2e2d226cb5b756925802abb1cd7cd84d9002f6b521043c8111411efa5dea836b/system.slice/hadoop-hdfs-datanode.service
                └─6536 /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java 
-Dproc_datanode -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote 
-Dyarn.log.dir=/var/log/hadoop-hdfs 
-Dyarn.log.file=hadoop-hdfs-datanode-2e2d226cb5b7.log 
-Dyarn.home.dir=/usr/lib/hadoop-yarn -Dyarn.root.logger=INFO,console 
-Djava.library.path=//usr/lib/hadoop/lib/native 
-Dhadoop.log.dir=/var/log/hadoop-hdfs 
-Dhadoop.log.file=hadoop-hdfs-datanode-2e2d226cb5b7.log 
-Dhadoop.home.dir=//usr/lib/hadoop -Dhadoop.id.str=hdfs 
-Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml 
-Dhadoop.security.logger=INFO,NullAppender 
org.apache.hadoop.hdfs.server.datanode.DataNode
   
   Jan 08 13:57:59 2e2d226cb5b7 systemd[1]: Starting 
hadoop-hdfs-datanode.service - Hadoop DataNode...
   Jan 08 13:58:01 2e2d226cb5b7 systemd[1]: Started 
hadoop-hdfs-datanode.service - Hadoop DataNode.
   ● hadoop-hdfs-journalnode.service - Hadoop Journalnode
        Loaded: loaded 
(/usr/lib/systemd/system/hadoop-hdfs-journalnode.service; static)
        Active: active (running) since Wed 2025-01-08 14:16:41 UTC; 2min 52s ago
          Docs: https://hadoop.apache.org/
       Process: 26960 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf 
--daemon start journalnode (code=exited, status=0/SUCCESS)
      Main PID: 26999 (java)
        CGroup: 
/docker/2e2d226cb5b756925802abb1cd7cd84d9002f6b521043c8111411efa5dea836b/system.slice/hadoop-hdfs-journalnode.service
                └─26999 /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java 
-Dproc_journalnode -Djava.net.preferIPv4Stack=true 
-Dyarn.log.dir=/var/log/hadoop-hdfs 
-Dyarn.log.file=hadoop-hdfs-journalnode-2e2d226cb5b7.log 
-Dyarn.home.dir=/usr/lib/hadoop-yarn -Dyarn.root.logger=INFO,console 
-Djava.library.path=//usr/lib/hadoop/lib/native 
-Dhadoop.log.dir=/var/log/hadoop-hdfs 
-Dhadoop.log.file=hadoop-hdfs-journalnode-2e2d226cb5b7.log 
-Dhadoop.home.dir=//usr/lib/hadoop -Dhadoop.id.str=hdfs 
-Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml 
-Dhadoop.security.logger=INFO,NullAppender 
org.apache.hadoop.hdfs.qjournal.server.JournalNode
   
   Jan 08 14:16:39 2e2d226cb5b7 systemd[1]: Starting 
hadoop-hdfs-journalnode.service - Hadoop Journalnode...
   Jan 08 14:16:41 2e2d226cb5b7 systemd[1]: Started 
hadoop-hdfs-journalnode.service - Hadoop Journalnode.
   ● hadoop-hdfs-secondarynamenode.service - Hadoop Secondary NameNode
        Loaded: loaded 
(/usr/lib/systemd/system/hadoop-hdfs-secondarynamenode.service; static)
        Active: active (running) since Wed 2025-01-08 14:16:44 UTC; 2min 50s ago
          Docs: https://hadoop.apache.org/
       Process: 27053 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf 
--daemon start secondarynamenode (code=exited, status=0/SUCCESS)
      Main PID: 27092 (java)
        CGroup: 
/docker/2e2d226cb5b756925802abb1cd7cd84d9002f6b521043c8111411efa5dea836b/system.slice/hadoop-hdfs-secondarynamenode.service
                └─27092 /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java 
-Dproc_secondarynamenode -Djava.net.preferIPv4Stack=true 
-Dhdfs.audit.logger=INFO,NullAppender -Dcom.sun.management.jmxremote 
-Dyarn.log.dir=/var/log/hadoop-hdfs 
-Dyarn.log.file=hadoop-hdfs-secondarynamenode-2e2d226cb5b7.log 
-Dyarn.home.dir=/usr/lib/hadoop-yarn -Dyarn.root.logger=INFO,console 
-Djava.library.path=//usr/lib/hadoop/lib/native 
-Dhadoop.log.dir=/var/log/hadoop-hdfs 
-Dhadoop.log.file=hadoop-hdfs-secondarynamenode-2e2d226cb5b7.log 
-Dhadoop.home.dir=//usr/lib/hadoop -Dhadoop.id.str=hdfs 
-Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml 
-Dhadoop.security.logger=INFO,NullAppender 
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
   
   Jan 08 14:16:42 2e2d226cb5b7 systemd[1]: Starting 
hadoop-hdfs-secondarynamenode.service - Hadoop Secondary NameNode...
   Jan 08 14:16:42 2e2d226cb5b7 hdfs[27053]: WARNING: 
HADOOP_SECONDARYNAMENODE_OPTS has been replaced by HDFS_SECONDARYNAMENODE_OPTS. 
Using value of HADOOP_SECONDARYNAMENODE_OPTS.
   Jan 08 14:16:44 2e2d226cb5b7 systemd[1]: Started 
hadoop-hdfs-secondarynamenode.service - Hadoop Secondary NameNode.
   × hadoop-hdfs-zkfc.service - Hadoop ZKFC
        Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-zkfc.service; 
static)
        Active: failed (Result: exit-code) since Wed 2025-01-08 14:16:46 UTC; 
2min 48s ago
          Docs: https://hadoop.apache.org/
       Process: 27139 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf 
--daemon start zkfc (code=exited, status=1/FAILURE)
   
   Jan 08 14:16:44 2e2d226cb5b7 systemd[1]: Starting hadoop-hdfs-zkfc.service - 
Hadoop ZKFC...
   Jan 08 14:16:45 2e2d226cb5b7 hdfs[27139]: ERROR: Cannot set priority of zkfc 
process 27178
   Jan 08 14:16:46 2e2d226cb5b7 systemd[1]: hadoop-hdfs-zkfc.service: Control 
process exited, code=exited, status=1/FAILURE
   Jan 08 14:16:46 2e2d226cb5b7 systemd[1]: hadoop-hdfs-zkfc.service: Failed 
with result 'exit-code'.
   Jan 08 14:16:46 2e2d226cb5b7 systemd[1]: Failed to start 
hadoop-hdfs-zkfc.service - Hadoop ZKFC.
   ● hadoop-hdfs-dfsrouter.service - Hadoop dfsrouter
        Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-dfsrouter.service; 
static)
        Active: active (running) since Wed 2025-01-08 14:16:48 UTC; 2min 46s ago
          Docs: https://hadoop.apache.org/
       Process: 27213 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf 
--daemon start dfsrouter (code=exited, status=0/SUCCESS)
      Main PID: 27252 (java)
        CGroup: 
/docker/2e2d226cb5b756925802abb1cd7cd84d9002f6b521043c8111411efa5dea836b/system.slice/hadoop-hdfs-dfsrouter.service
                └─27252 /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java 
-Dproc_dfsrouter -Djava.net.preferIPv4Stack=true 
-Dyarn.log.dir=/var/log/hadoop-hdfs 
-Dyarn.log.file=hadoop-hdfs-dfsrouter-2e2d226cb5b7.log 
-Dyarn.home.dir=/usr/lib/hadoop-yarn -Dyarn.root.logger=INFO,console 
-Djava.library.path=//usr/lib/hadoop/lib/native 
-Dhadoop.log.dir=/var/log/hadoop-hdfs 
-Dhadoop.log.file=hadoop-hdfs-dfsrouter-2e2d226cb5b7.log 
-Dhadoop.home.dir=//usr/lib/hadoop -Dhadoop.id.str=hdfs 
-Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml 
-Dhadoop.security.logger=INFO,NullAppender 
org.apache.hadoop.hdfs.server.federation.router.DFSRouter
   
   Jan 08 14:16:46 2e2d226cb5b7 systemd[1]: Starting 
hadoop-hdfs-dfsrouter.service - Hadoop dfsrouter...
   Jan 08 14:16:48 2e2d226cb5b7 systemd[1]: Started 
hadoop-hdfs-dfsrouter.service - Hadoop dfsrouter.
   ```
   
   While ZKFC failed to start, we can confirm that it is launched correctly via systemd and shuts down immediately only because HA is not configured in this environment (see the log below).
   
   ```
   $ cat /var/log/hadoop-hdfs/hadoop-hdfs-zkfc-2e2d226cb5b7.log
   
   (snip)
   
   2025-01-08 14:16:44,631 INFO 
org.apache.hadoop.hdfs.tools.DFSZKFailoverController: registered UNIX signal 
handlers for [TERM, HUP, INT]
   2025-01-08 14:16:44,849 ERROR 
org.apache.hadoop.hdfs.tools.DFSZKFailoverController: DFSZKFailOverController 
exiting due to earlier exception 
org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for this 
namenode.
   2025-01-08 14:16:44,851 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
status 1: org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled 
for this namenode.
   2025-01-08 14:16:44,853 INFO 
org.apache.hadoop.hdfs.tools.DFSZKFailoverController: SHUTDOWN_MSG:
   /************************************************************
   SHUTDOWN_MSG: Shutting down DFSZKFailoverController at 
2e2d226cb5b7.bigtop.apache.org/172.19.0.3
   ************************************************************/
   ```
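   
   For reference, ZKFC only starts once automatic failover is configured for an HA nameservice; the relevant settings can be inspected with `hdfs getconf` (a sketch using standard Hadoop HA property names; this cluster does not enable HA, as the log above shows):
   
   ```
   $ hdfs getconf -confKey dfs.nameservices
   $ hdfs getconf -confKey dfs.ha.automatic-failover.enabled
   $ hdfs getconf -confKey ha.zookeeper.quorum
   ```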
   
   Check that the packaged unit files are the ones in use (`systemctl cat`)
   
   ```
   $ for service_name in namenode datanode journalnode secondarynamenode zkfc dfsrouter; do systemctl cat hadoop-hdfs-$service_name; done
   
   # /usr/lib/systemd/system/hadoop-hdfs-namenode.service
   # Licensed to the Apache Software Foundation (ASF) under one or more
   # contributor license agreements.  See the NOTICE file distributed with
   # this work for additional information regarding copyright ownership.
   # The ASF licenses this file to You under the Apache License, Version 2.0
   # (the "License"); you may not use this file except in compliance with
   # the License.  You may obtain a copy of the License at
   #
   #     http://www.apache.org/licenses/LICENSE-2.0
   #
   # Unless required by applicable law or agreed to in writing, software
   # distributed under the License is distributed on an "AS IS" BASIS,
   # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   # See the License for the specific language governing permissions and
   # limitations under the License.
   
   [Unit]
   Documentation=https://hadoop.apache.org/
   Description=Hadoop NameNode
   Before=multi-user.target
   Before=graphical.target
   After=remote-fs.target
   
   [Service]
   User=hdfs
   Group=hdfs
   Type=forking
   Restart=no
   TimeoutSec=5min
   IgnoreSIGPIPE=no
   KillMode=process
   RemainAfterExit=no
   SuccessExitStatus=5 6
   ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon start namenode
   ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon stop namenode
   # /usr/lib/systemd/system/hadoop-hdfs-datanode.service
   # Licensed to the Apache Software Foundation (ASF) under one or more
   # contributor license agreements.  See the NOTICE file distributed with
   # this work for additional information regarding copyright ownership.
   # The ASF licenses this file to You under the Apache License, Version 2.0
   # (the "License"); you may not use this file except in compliance with
   # the License.  You may obtain a copy of the License at
   #
   #     http://www.apache.org/licenses/LICENSE-2.0
   #
   # Unless required by applicable law or agreed to in writing, software
   # distributed under the License is distributed on an "AS IS" BASIS,
   # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   # See the License for the specific language governing permissions and
   # limitations under the License.
   
   [Unit]
   Documentation=https://hadoop.apache.org/
   Description=Hadoop DataNode
   Before=multi-user.target
   Before=graphical.target
   After=remote-fs.target
   
   [Service]
   User=hdfs
   Group=hdfs
   Type=forking
   Restart=no
   TimeoutSec=5min
   IgnoreSIGPIPE=no
   KillMode=process
   RemainAfterExit=no
   SuccessExitStatus=5 6
   ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon start datanode
   ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon stop datanode
   # /usr/lib/systemd/system/hadoop-hdfs-journalnode.service
   # Licensed to the Apache Software Foundation (ASF) under one or more
   # contributor license agreements.  See the NOTICE file distributed with
   # this work for additional information regarding copyright ownership.
   # The ASF licenses this file to You under the Apache License, Version 2.0
   # (the "License"); you may not use this file except in compliance with
   # the License.  You may obtain a copy of the License at
   #
   #     http://www.apache.org/licenses/LICENSE-2.0
   #
   # Unless required by applicable law or agreed to in writing, software
   # distributed under the License is distributed on an "AS IS" BASIS,
   # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   # See the License for the specific language governing permissions and
   # limitations under the License.
   
   [Unit]
   Documentation=https://hadoop.apache.org/
   Description=Hadoop Journalnode
   Before=multi-user.target
   Before=graphical.target
   After=remote-fs.target
   
   [Service]
   User=hdfs
   Group=hdfs
   Type=forking
   Restart=no
   TimeoutSec=5min
   IgnoreSIGPIPE=no
   KillMode=process
   RemainAfterExit=no
   SuccessExitStatus=5 6
   ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon start journalnode
   ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon stop journalnode
   # /usr/lib/systemd/system/hadoop-hdfs-secondarynamenode.service
   # Licensed to the Apache Software Foundation (ASF) under one or more
   # contributor license agreements.  See the NOTICE file distributed with
   # this work for additional information regarding copyright ownership.
   # The ASF licenses this file to You under the Apache License, Version 2.0
   # (the "License"); you may not use this file except in compliance with
   # the License.  You may obtain a copy of the License at
   #
   #     http://www.apache.org/licenses/LICENSE-2.0
   #
   # Unless required by applicable law or agreed to in writing, software
   # distributed under the License is distributed on an "AS IS" BASIS,
   # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   # See the License for the specific language governing permissions and
   # limitations under the License.
   
   [Unit]
   Documentation=https://hadoop.apache.org/
   Description=Hadoop Secondary NameNode
   Before=multi-user.target
   Before=graphical.target
   After=remote-fs.target
   
   [Service]
   User=hdfs
   Group=hdfs
   Type=forking
   Restart=no
   TimeoutSec=5min
   IgnoreSIGPIPE=no
   KillMode=process
   RemainAfterExit=no
   SuccessExitStatus=5 6
   ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon start 
secondarynamenode
   ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon stop 
secondarynamenode
   # /usr/lib/systemd/system/hadoop-hdfs-zkfc.service
   # Licensed to the Apache Software Foundation (ASF) under one or more
   # contributor license agreements.  See the NOTICE file distributed with
   # this work for additional information regarding copyright ownership.
   # The ASF licenses this file to You under the Apache License, Version 2.0
   # (the "License"); you may not use this file except in compliance with
   # the License.  You may obtain a copy of the License at
   #
   #     http://www.apache.org/licenses/LICENSE-2.0
   #
   # Unless required by applicable law or agreed to in writing, software
   # distributed under the License is distributed on an "AS IS" BASIS,
   # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   # See the License for the specific language governing permissions and
   # limitations under the License.
   
   [Unit]
   Documentation=https://hadoop.apache.org/
   Description=Hadoop ZKFC
   Before=multi-user.target
   Before=graphical.target
   After=remote-fs.target
   
   [Service]
   User=hdfs
   Group=hdfs
   Type=forking
   Restart=no
   TimeoutSec=5min
   IgnoreSIGPIPE=no
   KillMode=process
   RemainAfterExit=no
   SuccessExitStatus=5 6
   ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon start zkfc
   ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon stop zkfc
   # /usr/lib/systemd/system/hadoop-hdfs-dfsrouter.service
   # Licensed to the Apache Software Foundation (ASF) under one or more
   # contributor license agreements.  See the NOTICE file distributed with
   # this work for additional information regarding copyright ownership.
   # The ASF licenses this file to You under the Apache License, Version 2.0
   # (the "License"); you may not use this file except in compliance with
   # the License.  You may obtain a copy of the License at
   #
   #     http://www.apache.org/licenses/LICENSE-2.0
   #
   # Unless required by applicable law or agreed to in writing, software
   # distributed under the License is distributed on an "AS IS" BASIS,
   # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   # See the License for the specific language governing permissions and
   # limitations under the License.
   
   [Unit]
   Documentation=https://hadoop.apache.org/
   Description=Hadoop dfsrouter
   Before=multi-user.target
   Before=graphical.target
   After=remote-fs.target
   
   [Service]
   User=hdfs
   Group=hdfs
   Type=forking
   Restart=no
   TimeoutSec=5min
   IgnoreSIGPIPE=no
   KillMode=process
   RemainAfterExit=no
   SuccessExitStatus=5 6
   ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon start dfsrouter
   ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon stop dfsrouter
   ```
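   
   The shipped unit files can also be sanity-checked statically with `systemd-analyze verify` (a sketch; this only catches parsing and dependency problems, not runtime behaviour):
   
   ```
   $ systemd-analyze verify /usr/lib/systemd/system/hadoop-hdfs-*.service
   ```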
   
   Check that the services come back after a restart (container restart followed by `systemctl start`)
   
   ```
   $ docker restart
   $ for service_name in namenode datanode journalnode secondarynamenode zkfc dfsrouter; do systemctl start hadoop-hdfs-$service_name; done
   $ for service_name in namenode datanode journalnode secondarynamenode zkfc dfsrouter; do systemctl status hadoop-hdfs-$service_name; done
   
   ● hadoop-hdfs-namenode.service - Hadoop NameNode
        Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-namenode.service; 
static)
        Active: active (running) since Wed 2025-01-08 14:31:23 UTC; 22s ago
          Docs: https://hadoop.apache.org/
       Process: 149 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon 
start namenode (code=exited, status=0/SUCCESS)
      Main PID: 189 (java)
        CGroup: 
/docker/2e2d226cb5b756925802abb1cd7cd84d9002f6b521043c8111411efa5dea836b/system.slice/hadoop-hdfs-namenode.service
                └─189 /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java 
-Dproc_namenode -Djava.net.preferIPv4Stack=true 
-Dhdfs.audit.logger=INFO,NullAppender -Dcom.sun.management.jmxremote 
-Dyarn.log.dir=/var/log/hadoop-hdfs 
-Dyarn.log.file=hadoop-hdfs-namenode-2e2d226cb5b7.log 
-Dyarn.home.dir=/usr/lib/hadoop-yarn -Dyarn.root.logger=INFO,console 
-Djava.library.path=//usr/lib/hadoop/lib/native 
-Dhadoop.log.dir=/var/log/hadoop-hdfs 
-Dhadoop.log.file=hadoop-hdfs-namenode-2e2d226cb5b7.log 
-Dhadoop.home.dir=//usr/lib/hadoop -Dhadoop.id.str=hdfs 
-Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml 
-Dhadoop.security.logger=INFO,NullAppender 
org.apache.hadoop.hdfs.server.namenode.NameNode
   
   Jan 08 14:31:21 2e2d226cb5b7 systemd[1]: Starting 
hadoop-hdfs-namenode.service - Hadoop NameNode...
   Jan 08 14:31:23 2e2d226cb5b7 systemd[1]: Started 
hadoop-hdfs-namenode.service - Hadoop NameNode.
   ● hadoop-hdfs-datanode.service - Hadoop DataNode
        Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-datanode.service; 
static)
        Active: active (running) since Wed 2025-01-08 14:31:25 UTC; 20s ago
          Docs: https://hadoop.apache.org/
       Process: 237 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon 
start datanode (code=exited, status=0/SUCCESS)
      Main PID: 278 (java)
        CGroup: 
/docker/2e2d226cb5b756925802abb1cd7cd84d9002f6b521043c8111411efa5dea836b/system.slice/hadoop-hdfs-datanode.service
                └─278 /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java 
-Dproc_datanode -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote 
-Dyarn.log.dir=/var/log/hadoop-hdfs 
-Dyarn.log.file=hadoop-hdfs-datanode-2e2d226cb5b7.log 
-Dyarn.home.dir=/usr/lib/hadoop-yarn -Dyarn.root.logger=INFO,console 
-Djava.library.path=//usr/lib/hadoop/lib/native 
-Dhadoop.log.dir=/var/log/hadoop-hdfs 
-Dhadoop.log.file=hadoop-hdfs-datanode-2e2d226cb5b7.log 
-Dhadoop.home.dir=//usr/lib/hadoop -Dhadoop.id.str=hdfs 
-Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml 
-Dhadoop.security.logger=INFO,NullAppender 
org.apache.hadoop.hdfs.server.datanode.DataNode
   
   Jan 08 14:31:23 2e2d226cb5b7 systemd[1]: Starting 
hadoop-hdfs-datanode.service - Hadoop DataNode...
   Jan 08 14:31:25 2e2d226cb5b7 systemd[1]: Started 
hadoop-hdfs-datanode.service - Hadoop DataNode.
   ● hadoop-hdfs-journalnode.service - Hadoop Journalnode
        Loaded: loaded 
(/usr/lib/systemd/system/hadoop-hdfs-journalnode.service; static)
        Active: active (running) since Wed 2025-01-08 14:31:27 UTC; 18s ago
          Docs: https://hadoop.apache.org/
       Process: 397 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon 
start journalnode (code=exited, status=0/SUCCESS)
      Main PID: 439 (java)
        CGroup: 
/docker/2e2d226cb5b756925802abb1cd7cd84d9002f6b521043c8111411efa5dea836b/system.slice/hadoop-hdfs-journalnode.service
                └─439 /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java 
-Dproc_journalnode -Djava.net.preferIPv4Stack=true 
-Dyarn.log.dir=/var/log/hadoop-hdfs 
-Dyarn.log.file=hadoop-hdfs-journalnode-2e2d226cb5b7.log 
-Dyarn.home.dir=/usr/lib/hadoop-yarn -Dyarn.root.logger=INFO,console 
-Djava.library.path=//usr/lib/hadoop/lib/native 
-Dhadoop.log.dir=/var/log/hadoop-hdfs 
-Dhadoop.log.file=hadoop-hdfs-journalnode-2e2d226cb5b7.log 
-Dhadoop.home.dir=//usr/lib/hadoop -Dhadoop.id.str=hdfs 
-Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml 
-Dhadoop.security.logger=INFO,NullAppender 
org.apache.hadoop.hdfs.qjournal.server.JournalNode
   
   Jan 08 14:31:25 2e2d226cb5b7 systemd[1]: Starting 
hadoop-hdfs-journalnode.service - Hadoop Journalnode...
   Jan 08 14:31:27 2e2d226cb5b7 systemd[1]: Started 
hadoop-hdfs-journalnode.service - Hadoop Journalnode.
   ● hadoop-hdfs-secondarynamenode.service - Hadoop Secondary NameNode
        Loaded: loaded 
(/usr/lib/systemd/system/hadoop-hdfs-secondarynamenode.service; static)
        Active: active (running) since Wed 2025-01-08 14:31:29 UTC; 16s ago
          Docs: https://hadoop.apache.org/
       Process: 520 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon 
start secondarynamenode (code=exited, status=0/SUCCESS)
      Main PID: 560 (java)
        CGroup: 
/docker/2e2d226cb5b756925802abb1cd7cd84d9002f6b521043c8111411efa5dea836b/system.slice/hadoop-hdfs-secondarynamenode.service
                └─560 /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java 
-Dproc_secondarynamenode -Djava.net.preferIPv4Stack=true 
-Dhdfs.audit.logger=INFO,NullAppender -Dcom.sun.management.jmxremote 
-Dyarn.log.dir=/var/log/hadoop-hdfs 
-Dyarn.log.file=hadoop-hdfs-secondarynamenode-2e2d226cb5b7.log 
-Dyarn.home.dir=/usr/lib/hadoop-yarn -Dyarn.root.logger=INFO,console 
-Djava.library.path=//usr/lib/hadoop/lib/native 
-Dhadoop.log.dir=/var/log/hadoop-hdfs 
-Dhadoop.log.file=hadoop-hdfs-secondarynamenode-2e2d226cb5b7.log 
-Dhadoop.home.dir=//usr/lib/hadoop -Dhadoop.id.str=hdfs 
-Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml 
-Dhadoop.security.logger=INFO,NullAppender 
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
   
   Jan 08 14:31:27 2e2d226cb5b7 systemd[1]: Starting 
hadoop-hdfs-secondarynamenode.service - Hadoop Secondary NameNode...
   Jan 08 14:31:27 2e2d226cb5b7 hdfs[520]: WARNING: 
HADOOP_SECONDARYNAMENODE_OPTS has been replaced by HDFS_SECONDARYNAMENODE_OPTS. 
Using value of HADOOP_SECONDARYNAMENODE_OPTS.
   Jan 08 14:31:29 2e2d226cb5b7 systemd[1]: Started 
hadoop-hdfs-secondarynamenode.service - Hadoop Secondary NameNode.
   × hadoop-hdfs-zkfc.service - Hadoop ZKFC
        Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-zkfc.service; 
static)
        Active: failed (Result: exit-code) since Wed 2025-01-08 14:31:31 UTC; 
13s ago
          Docs: https://hadoop.apache.org/
       Process: 607 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon 
start zkfc (code=exited, status=1/FAILURE)
   
   Jan 08 14:31:29 2e2d226cb5b7 systemd[1]: Starting hadoop-hdfs-zkfc.service - 
Hadoop ZKFC...
   Jan 08 14:31:30 2e2d226cb5b7 hdfs[607]: ERROR: Cannot set priority of zkfc 
process 647
   Jan 08 14:31:31 2e2d226cb5b7 systemd[1]: hadoop-hdfs-zkfc.service: Control 
process exited, code=exited, status=1/FAILURE
   Jan 08 14:31:31 2e2d226cb5b7 systemd[1]: hadoop-hdfs-zkfc.service: Failed 
with result 'exit-code'.
   Jan 08 14:31:31 2e2d226cb5b7 systemd[1]: Failed to start 
hadoop-hdfs-zkfc.service - Hadoop ZKFC.
   ● hadoop-hdfs-dfsrouter.service - Hadoop dfsrouter
        Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-dfsrouter.service; 
static)
        Active: active (running) since Wed 2025-01-08 14:31:33 UTC; 11s ago
          Docs: https://hadoop.apache.org/
       Process: 682 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon 
start dfsrouter (code=exited, status=0/SUCCESS)
      Main PID: 722 (java)
        CGroup: 
/docker/2e2d226cb5b756925802abb1cd7cd84d9002f6b521043c8111411efa5dea836b/system.slice/hadoop-hdfs-dfsrouter.service
                └─722 /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java 
-Dproc_dfsrouter -Djava.net.preferIPv4Stack=true 
-Dyarn.log.dir=/var/log/hadoop-hdfs 
-Dyarn.log.file=hadoop-hdfs-dfsrouter-2e2d226cb5b7.log 
-Dyarn.home.dir=/usr/lib/hadoop-yarn -Dyarn.root.logger=INFO,console 
-Djava.library.path=//usr/lib/hadoop/lib/native 
-Dhadoop.log.dir=/var/log/hadoop-hdfs 
-Dhadoop.log.file=hadoop-hdfs-dfsrouter-2e2d226cb5b7.log 
-Dhadoop.home.dir=//usr/lib/hadoop -Dhadoop.id.str=hdfs 
-Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml 
-Dhadoop.security.logger=INFO,NullAppender 
org.apache.hadoop.hdfs.server.federation.router.DFSRouter
   
   Jan 08 14:31:31 2e2d226cb5b7 systemd[1]: Starting 
hadoop-hdfs-dfsrouter.service - Hadoop dfsrouter...
   Jan 08 14:31:33 2e2d226cb5b7 systemd[1]: Started 
hadoop-hdfs-dfsrouter.service - Hadoop dfsrouter.
   ```
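   
   Since the units are `static` with `Restart=no`, they do not come back on their own after the container restart; the `systemctl start` loop above is what brings them up. A compact way to see all of them at once (a sketch):
   
   ```
   $ systemctl list-units --all 'hadoop-hdfs-*'
   ```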
   
   Check that the services can be stopped (`systemctl stop`)
   
   ```
   root@2e2d226cb5b7:/# ls -lah  /run/hadoop-hdfs
   total 24K
   drwxr-xr-x  2 hdfs hadoop 160 Jan  8 14:31 .
   drwxr-xr-x 16 root root   380 Jan  8 14:30 ..
   -rw-r--r--  1 hdfs hdfs     4 Jan  8 14:31 hadoop-hdfs-datanode.pid
   -rw-r--r--  1 hdfs hdfs     4 Jan  8 14:31 hadoop-hdfs-dfsrouter.pid
   -rw-r--r--  1 hdfs hdfs     4 Jan  8 14:31 hadoop-hdfs-journalnode.pid
   -rw-r--r--  1 hdfs hdfs     4 Jan  8 14:31 hadoop-hdfs-namenode.pid
   -rw-r--r--  1 hdfs hdfs     4 Jan  8 14:31 hadoop-hdfs-secondarynamenode.pid
   -rw-r--r--  1 hdfs hdfs     4 Jan  8 14:31 hadoop-hdfs-zkfc.pid
   root@2e2d226cb5b7:/# for service_name in namenode datanode journalnode secondarynamenode zkfc dfsrouter; do systemctl stop hadoop-hdfs-$service_name; done
   root@2e2d226cb5b7:/# for service_name in namenode datanode journalnode secondarynamenode zkfc dfsrouter; do systemctl status hadoop-hdfs-$service_name; done
   × hadoop-hdfs-namenode.service - Hadoop NameNode
        Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-namenode.service; 
static)
        Active: failed (Result: exit-code) since Wed 2025-01-08 14:33:28 UTC; 
14s ago
      Duration: 2min 3.941s
          Docs: https://hadoop.apache.org/
       Process: 149 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon 
start namenode (code=exited, status=0/SUCCESS)
       Process: 828 ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon 
stop namenode (code=exited, status=0/SUCCESS)
      Main PID: 189 (code=exited, status=143)
   
   Jan 08 14:31:21 2e2d226cb5b7 systemd[1]: Starting 
hadoop-hdfs-namenode.service - Hadoop NameNode...
   Jan 08 14:31:23 2e2d226cb5b7 systemd[1]: Started 
hadoop-hdfs-namenode.service - Hadoop NameNode.
   Jan 08 14:33:27 2e2d226cb5b7 systemd[1]: Stopping 
hadoop-hdfs-namenode.service - Hadoop NameNode...
   Jan 08 14:33:27 2e2d226cb5b7 systemd[1]: hadoop-hdfs-namenode.service: Main 
process exited, code=exited, status=143/n/a
   Jan 08 14:33:28 2e2d226cb5b7 systemd[1]: hadoop-hdfs-namenode.service: 
Failed with result 'exit-code'.
   Jan 08 14:33:28 2e2d226cb5b7 systemd[1]: Stopped 
hadoop-hdfs-namenode.service - Hadoop NameNode.
   × hadoop-hdfs-datanode.service - Hadoop DataNode
        Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-datanode.service; 
static)
        Active: failed (Result: exit-code) since Wed 2025-01-08 14:33:29 UTC; 
13s ago
      Duration: 2min 2.940s
          Docs: https://hadoop.apache.org/
       Process: 237 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon 
start datanode (code=exited, status=0/SUCCESS)
       Process: 876 ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon 
stop datanode (code=exited, status=0/SUCCESS)
      Main PID: 278 (code=exited, status=143)
   
   Jan 08 14:31:23 2e2d226cb5b7 systemd[1]: Starting 
hadoop-hdfs-datanode.service - Hadoop DataNode...
   Jan 08 14:31:25 2e2d226cb5b7 systemd[1]: Started 
hadoop-hdfs-datanode.service - Hadoop DataNode.
   Jan 08 14:33:28 2e2d226cb5b7 systemd[1]: Stopping 
hadoop-hdfs-datanode.service - Hadoop DataNode...
   Jan 08 14:33:28 2e2d226cb5b7 systemd[1]: hadoop-hdfs-datanode.service: Main 
process exited, code=exited, status=143/n/a
   Jan 08 14:33:29 2e2d226cb5b7 systemd[1]: hadoop-hdfs-datanode.service: 
Failed with result 'exit-code'.
   Jan 08 14:33:29 2e2d226cb5b7 systemd[1]: Stopped 
hadoop-hdfs-datanode.service - Hadoop DataNode.
   × hadoop-hdfs-journalnode.service - Hadoop Journalnode
        Loaded: loaded 
(/usr/lib/systemd/system/hadoop-hdfs-journalnode.service; static)
        Active: failed (Result: exit-code) since Wed 2025-01-08 14:33:30 UTC; 
12s ago
      Duration: 2min 1.939s
          Docs: https://hadoop.apache.org/
       Process: 397 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon 
start journalnode (code=exited, status=0/SUCCESS)
       Process: 926 ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon 
stop journalnode (code=exited, status=0/SUCCESS)
      Main PID: 439 (code=exited, status=143)
   
   Jan 08 14:31:25 2e2d226cb5b7 systemd[1]: Starting 
hadoop-hdfs-journalnode.service - Hadoop Journalnode...
   Jan 08 14:31:27 2e2d226cb5b7 systemd[1]: Started 
hadoop-hdfs-journalnode.service - Hadoop Journalnode.
   Jan 08 14:33:29 2e2d226cb5b7 systemd[1]: Stopping 
hadoop-hdfs-journalnode.service - Hadoop Journalnode...
   Jan 08 14:33:29 2e2d226cb5b7 systemd[1]: hadoop-hdfs-journalnode.service: 
Main process exited, code=exited, status=143/n/a
   Jan 08 14:33:30 2e2d226cb5b7 systemd[1]: hadoop-hdfs-journalnode.service: 
Failed with result 'exit-code'.
   Jan 08 14:33:30 2e2d226cb5b7 systemd[1]: Stopped 
hadoop-hdfs-journalnode.service - Hadoop Journalnode.
   × hadoop-hdfs-secondarynamenode.service - Hadoop Secondary NameNode
        Loaded: loaded 
(/usr/lib/systemd/system/hadoop-hdfs-secondarynamenode.service; static)
        Active: failed (Result: exit-code) since Wed 2025-01-08 14:33:31 UTC; 
10s ago
      Duration: 2min 932ms
          Docs: https://hadoop.apache.org/
       Process: 520 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon 
start secondarynamenode (code=exited, status=0/SUCCESS)
       Process: 974 ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon 
stop secondarynamenode (code=exited, status=0/SUCCESS)
      Main PID: 560 (code=exited, status=143)
   
   Jan 08 14:31:27 2e2d226cb5b7 systemd[1]: Starting 
hadoop-hdfs-secondarynamenode.service - Hadoop Secondary NameNode...
   Jan 08 14:31:27 2e2d226cb5b7 hdfs[520]: WARNING: 
HADOOP_SECONDARYNAMENODE_OPTS has been replaced by HDFS_SECONDARYNAMENODE_OPTS. 
Using value of HADOOP_SECONDARYNAMENODE_OPTS.
   Jan 08 14:31:29 2e2d226cb5b7 systemd[1]: Started 
hadoop-hdfs-secondarynamenode.service - Hadoop Secondary NameNode.
   Jan 08 14:33:30 2e2d226cb5b7 systemd[1]: Stopping 
hadoop-hdfs-secondarynamenode.service - Hadoop Secondary NameNode...
   Jan 08 14:33:30 2e2d226cb5b7 hdfs[974]: WARNING: 
HADOOP_SECONDARYNAMENODE_OPTS has been replaced by HDFS_SECONDARYNAMENODE_OPTS. 
Using value of HADOOP_SECONDARYNAMENODE_OPTS.
   Jan 08 14:33:30 2e2d226cb5b7 systemd[1]: 
hadoop-hdfs-secondarynamenode.service: Main process exited, code=exited, 
status=143/n/a
   Jan 08 14:33:31 2e2d226cb5b7 systemd[1]: 
hadoop-hdfs-secondarynamenode.service: Failed with result 'exit-code'.
   Jan 08 14:33:31 2e2d226cb5b7 systemd[1]: Stopped 
hadoop-hdfs-secondarynamenode.service - Hadoop Secondary NameNode.
   × hadoop-hdfs-zkfc.service - Hadoop ZKFC
        Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-zkfc.service; 
static)
        Active: failed (Result: exit-code) since Wed 2025-01-08 14:31:31 UTC; 
2min 10s ago
          Docs: https://hadoop.apache.org/
       Process: 607 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon 
start zkfc (code=exited, status=1/FAILURE)
   
   Jan 08 14:31:29 2e2d226cb5b7 systemd[1]: Starting hadoop-hdfs-zkfc.service - 
Hadoop ZKFC...
   Jan 08 14:31:30 2e2d226cb5b7 hdfs[607]: ERROR: Cannot set priority of zkfc 
process 647
   Jan 08 14:31:31 2e2d226cb5b7 systemd[1]: hadoop-hdfs-zkfc.service: Control 
process exited, code=exited, status=1/FAILURE
   Jan 08 14:31:31 2e2d226cb5b7 systemd[1]: hadoop-hdfs-zkfc.service: Failed 
with result 'exit-code'.
   Jan 08 14:31:31 2e2d226cb5b7 systemd[1]: Failed to start 
hadoop-hdfs-zkfc.service - Hadoop ZKFC.
   × hadoop-hdfs-dfsrouter.service - Hadoop dfsrouter
        Loaded: loaded (/usr/lib/systemd/system/hadoop-hdfs-dfsrouter.service; 
static)
        Active: failed (Result: exit-code) since Wed 2025-01-08 14:33:32 UTC; 
9s ago
      Duration: 1min 57.833s
          Docs: https://hadoop.apache.org/
       Process: 682 ExecStart=/usr/bin/hdfs --config /etc/hadoop/conf --daemon 
start dfsrouter (code=exited, status=0/SUCCESS)
       Process: 1024 ExecStop=/usr/bin/hdfs --config /etc/hadoop/conf --daemon 
stop dfsrouter (code=exited, status=0/SUCCESS)
      Main PID: 722 (code=exited, status=143)
   
   Jan 08 14:31:31 2e2d226cb5b7 systemd[1]: Starting 
hadoop-hdfs-dfsrouter.service - Hadoop dfsrouter...
   Jan 08 14:31:33 2e2d226cb5b7 systemd[1]: Started 
hadoop-hdfs-dfsrouter.service - Hadoop dfsrouter.
   Jan 08 14:33:31 2e2d226cb5b7 systemd[1]: Stopping 
hadoop-hdfs-dfsrouter.service - Hadoop dfsrouter...
   Jan 08 14:33:31 2e2d226cb5b7 systemd[1]: hadoop-hdfs-dfsrouter.service: Main 
process exited, code=exited, status=143/n/a
   Jan 08 14:33:32 2e2d226cb5b7 systemd[1]: hadoop-hdfs-dfsrouter.service: 
Failed with result 'exit-code'.
   Jan 08 14:33:32 2e2d226cb5b7 systemd[1]: Stopped 
hadoop-hdfs-dfsrouter.service - Hadoop dfsrouter.
   ```
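   
   As a final check after `systemctl stop`, one can confirm that no HDFS daemons are left behind (a sketch; the `-Dproc_*` markers come from the java command lines shown above):
   
   ```
   $ pgrep -af 'Dproc_(namenode|datanode|journalnode|secondarynamenode|dfsrouter|zkfc)' || echo "no HDFS daemons running"
   $ ls -lah /run/hadoop-hdfs
   ```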
   </details>

