[ https://issues.apache.org/jira/browse/BIGTOP-1129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807705#comment-13807705 ]

Bruno Mahé commented on BIGTOP-1129:
------------------------------------

I tried again tonight on some new instances and I can still reproduce it.
I used the standard Amazon AMI with our CentOS 6 repo.

{noformat}
    1  wget http://mirrors.kernel.org/fedora-epel/6/i386/epel-release-6-8.noarch.rpm
    2  sudo rpm -Uvh epel-release-6-8.noarch.rpm 
    3  sudo vim /etc/yum.repos.d/epel.repo 
    4  sudo yum install nethogs sysstat htop tree java-1.6.0-openjdk-devel.x86_64
    5  sudo yum search hadoop
    6  sudo yum install hadoop-conf-pseudo.x86_64
    7  chkconfig --list
   12  pushd /etc/yum.repos.d/
   13  sudo wget http://bigtop01.cloudera.org:8080/view/Releases/job/Bigtop-0.7.0/label=centos6/lastSuccessfulBuild/artifact/output/bigtop.repo
   14  sudo vim bigtop.repo 
   15  yum search hadoop
   16  sudo yum install hadoop-client.x86_64 hadoop-hdfs-namenode.x86_64 hadoop-mapreduce.x86_64 hadoop-yarn.x86_64 hadoop-mapreduce-historyserver.x86_64 hadoop-yarn-resourcemanager.x86_64
   32  sudo cp namenode/* /etc/hadoop/conf/
   39  sudo umount /media/ephemeral0/
   40  sudo fdisk /dev/sdb
   41  sudo fdisk /dev/sdc
   42  sudo mkfs.ext4 /dev/sdb
   43  sudo mkfs.ext4 /dev/sdc
   44   sudo mkdir /local/data0
   45   sudo mkdir /local/data1
   46   sudo mount -t ext4 /dev/sdb /local/data0/
   47   sudo mount -t ext4 /dev/sdc /local/data1
   48  ls /var/lib/hadoop-hdfs/cache
   49   sudo mkdir -p /local/data1/hadoop/hdfs
   50   sudo chown -R hdfs /local/data1/hadoop/hdfs
   51   sudo cp -a /var/lib/hadoop-hdfs /local/data1/hadoop/hdfs/
   52   ls -al /local/data1/hadoop/hdfs/hadoop-hdfs/cache/
   53   ls -al /local/data1/hadoop/hdfs/hadoop-hdfs/
   54   ls -al /local/data0/hadoop/hdfs/hadoop-hdfs/cache/
   55   ls -al /local/data0/hadoop/hdfs/hadoop-hdfs/
   56   sudo mkdir -p /local/data0/hadoop/hdfs
   57   sudo chown -R hdfs /local/data1/hadoop/hdfs
   58   sudo cp -a /var/lib/hadoop-hdfs /local/data0/hadoop/hdfs/
   59   ls -al /local/data0/hadoop/hdfs/hadoop-hdfs/cache/
   60   ls -al /local/data0/hadoop/hdfs
   61   ls -al /local/data1/hadoop/hdfs
   62  sudo vim /etc/hadoop/conf/core-site.xml 
   63  sudo vim /etc/hadoop/conf/hdfs-site.xml 
   64  sudo vim /etc/hadoop/conf/hadoop-env.sh 
   65  sudo /etc/init.d/hadoop-hdfs-namenode status
   66  sudo /etc/init.d/hadoop-hdfs-namenode start
   67  sudo /etc/init.d/hadoop-hdfs-namenode status
   68  less /var/log/hadoop-hdfs/hadoop-hdfs-namenode-ip-172-31-34-231.log 
   69  ls -al /local/data0/hadoop/hdfs/hadoop-hdfs/cache/hdfs/dfs/name
   70  ls -al /local/data0/hadoop/hdfs/hadoop-hdfs/cache/
   71  sudo /etc/init.d/hadoop-hdfs-namenode init
   72  sudo /etc/init.d/hadoop-hdfs-namenode status
   73  sudo /etc/init.d/hadoop-hdfs-namenode start
   74  sudo /etc/init.d/hadoop-hdfs-namenode status
   75  ps auxww | grep nameno
   76  sudo /etc/init.d/hadoop-hdfs-namenode stop
   77  ps auxww | grep nameno
{noformat}
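The disk-relocation part of the history (steps 39-58) can be condensed into the sketch below. It is only an illustration: temp directories stand in for /var/lib/hadoop-hdfs and the /local/data* mount points so the resulting layout can be checked without root, and the mkfs/mount/chown steps are left as comments. (Worth double-checking: step 57 above chowns /local/data1 a second time, so /local/data0/hadoop/hdfs may still be owned by root.)

```shell
# Stand-ins for /var/lib/hadoop-hdfs, /local/data0 and /local/data1 so the
# copy layout can be verified without root or real disks.
SRC=$(mktemp -d)
DATA0=$(mktemp -d)
DATA1=$(mktemp -d)
mkdir -p "$SRC/cache/hdfs/dfs/name"

for disk in "$DATA0" "$DATA1"; do
    # Real node: sudo mkfs.ext4 /dev/sdX && sudo mount -t ext4 /dev/sdX "$disk"
    mkdir -p "$disk/hadoop/hdfs"
    # Real node: sudo chown -R hdfs "$disk/hadoop/hdfs"
    # cp -a preserves ownership and modes of the copied tree; the source
    # directory is copied *into* hadoop/hdfs/ as hadoop-hdfs.
    cp -a "$SRC" "$disk/hadoop/hdfs/hadoop-hdfs"
done

ls "$DATA1/hadoop/hdfs/hadoop-hdfs/cache"
```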


core-site.xml:
{noformat}
[ec2-user@ip-172-31-34-231 ~]$ cat /etc/hadoop/conf/core-site.xml 
<?xml version="1.0"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://ec2-54-200-186-192.us-west-2.compute.amazonaws.com:8020</value>
  </property>

  <!-- OOZIE proxy user setting -->
  <property>
    <name>hadoop.proxyuser.oozie.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.oozie.groups</name>
    <value>*</value>
  </property>

  <!-- HTTPFS proxy user setting -->
  <property>
    <name>hadoop.proxyuser.httpfs.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.httpfs.groups</name>
    <value>*</value>
  </property>

</configuration>
{noformat}
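(fs.default.name is the deprecated alias of fs.defaultFS in Hadoop 2.x; both are accepted.) A quick way to pull the NameNode URI out of the file without a running cluster is sketched below. The heredoc is a copy of the relevant property; on a real node, point CONF at /etc/hadoop/conf/core-site.xml instead.

```shell
# Extract the <value> that follows the fs.default.name property.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://ec2-54-200-186-192.us-west-2.compute.amazonaws.com:8020</value>
  </property>
</configuration>
EOF
uri=$(grep -A1 '<name>fs.default.name</name>' "$CONF" \
      | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
echo "$uri"
rm -f "$CONF"
```

On a node with the client installed, `hdfs getconf -confKey fs.defaultFS` should report the same value after deprecation mapping.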

hdfs-site.xml:
{noformat}
[ec2-user@ip-172-31-34-231 ~]$ cat /etc/hadoop/conf/hdfs-site.xml 
<?xml version="1.0"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <!-- Immediately exit safemode as soon as one DataNode checks in. 
       On a multi-node cluster, these configurations must be removed.  -->
  <property>
    <name>dfs.safemode.extension</name>
    <value>0</value>
  </property>
  <property>
     <name>dfs.safemode.min.datanodes</name>
     <value>1</value>
  </property>
  <property>
     <name>hadoop.tmp.dir</name>
     <value>/local/data1/hadoop/hdfs/hadoop-hdfs/cache/${user.name}</value>
  </property>
  <property>
     <name>dfs.namenode.name.dir</name>
     <value>file:///local/data0/hadoop/hdfs/hadoop-hdfs/cache/${user.name}/dfs/name,file:///local/data1/hadoop/hdfs/hadoop-hdfs/cache/${user.name}/dfs/name</value>
  </property>
  <property>
     <name>dfs.namenode.checkpoint.dir</name>
     <value>file:///local/data0/hadoop/hdfs/hadoop-hdfs/cache/${user.name}/dfs/namesecondary,file:///local/data1/hadoop/hdfs/hadoop-hdfs/cache/${user.name}/dfs/namesecondary</value>
  </property>
  <property>
     <name>dfs.datanode.data.dir</name>
     <value>file:///local/data0/hadoop/hdfs/hadoop-hdfs/cache/${user.name}/dfs/data,file:///local/data1/hadoop/hdfs/hadoop-hdfs/cache/${user.name}/dfs/data</value>
  </property>
</configuration>
{noformat}
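Since dfs.namenode.name.dir lists two directories, the NameNode needs write access to both. A minimal sketch for walking the comma-separated list (value copied from the hdfs-site.xml above, with ${user.name} expanded to hdfs by hand, since that is the user the daemon runs as):

```shell
# The configured name dirs, as the hdfs user would resolve them.
dirs='file:///local/data0/hadoop/hdfs/hadoop-hdfs/cache/hdfs/dfs/name,file:///local/data1/hadoop/hdfs/hadoop-hdfs/cache/hdfs/dfs/name'

# Split on commas and strip the file:// scheme from each entry.
echo "$dirs" | tr ',' '\n' | while read -r uri; do
    path=${uri#file://}
    # On the real node: sudo -u hdfs test -w "$path" || echo "not writable: $path"
    echo "would check: $path"
done
```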

I did not touch any other files.
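For what it's worth, the stop failure is easy to observe by comparing the pid file against the process table before and after calling stop. A minimal sketch (the function name and paths are illustrative, and the demo runs against the current shell's pid rather than the datanode's):

```shell
# Report whether the pid recorded in a pid file still refers to a live
# process. On the real node, point it at
# /var/run/hadoop-hdfs/hadoop-hdfs-datanode.pid before and after "stop".
pidfile_status() {
    pidfile="$1"
    if [ ! -f "$pidfile" ]; then
        echo "missing"
        return 0
    fi
    pid=$(cat "$pidfile")
    # kill -0 probes for process existence without sending a signal.
    if kill -0 "$pid" 2>/dev/null; then
        echo "running (pid $pid)"
    else
        echo "stale (pid $pid)"
    fi
}

# Demo against this shell's own pid instead of the datanode's.
echo $$ > /tmp/demo-datanode.pid
pidfile_status /tmp/demo-datanode.pid
rm -f /tmp/demo-datanode.pid
pidfile_status /tmp/demo-datanode.pid
```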


> Cannot stop datanode through init script
> ----------------------------------------
>
>                 Key: BIGTOP-1129
>                 URL: https://issues.apache.org/jira/browse/BIGTOP-1129
>             Project: Bigtop
>          Issue Type: Bug
>          Components: Init scripts
>    Affects Versions: 0.7.0
>         Environment: Centos
>            Reporter: Bruno Mahé
>
> {noformat}sudo /etc/init.d/hadoop-hdfs-datanode stop{noformat}
> When starting the datanode, I do see a correct pid file in
> /var/run/hadoop-hdfs/hadoop-hdfs-datanode.pid, but whenever I call the stop
> command, the pid file disappears while the process still exists.



--
This message was sent by Atlassian JIRA
(v6.1#6144)
