The NameNode and DataNode are running normally, as shown in the process
listing below. The file "hbase-site.xml" and the other associated files are enclosed.
Thanks.

-------------------------------------------------------------------------------------------------------------------------------
[hadoop@hadoop2 conf]$ jps
11805 SecondaryNameNode
32314 Jps
11614 DataNode
507 NodeManager
385 ResourceManager
11379 NameNode
------------------------------------------------------------------------------------------------------------------------------------
[hadoop@hadoop2 hadoop-2.7.3]$ bin/hdfs dfsadmin -report
Configured Capacity: 154684043264 (144.06 GB)
Present Capacity: 133174730752 (124.03 GB)
DFS Remaining: 128144982016 (119.34 GB)
DFS Used: 5029748736 (4.68 GB)
DFS Used%: 3.78%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------

Live datanodes (1):

Name: 127.0.0.1:9866 (localhost)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 154684043264 (144.06 GB)
DFS Used: 5029748736 (4.68 GB)
Non DFS Used: 21509312512 (20.03 GB)
DFS Remaining: 128144982016 (119.34 GB)
DFS Used%: 3.25%
DFS Remaining%: 82.84%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Nov 15 13:17:01 CST 2016
...



-----Original Message-----
From: Ted Yu [mailto:[email protected]] 
Sent: Tuesday, November 15, 2016 11:50 AM
To: [email protected]
Subject: Re: problem in launching HBase

2016-10-31 15:49:57,528 FATAL [localhost:16000.activeMasterManager]
master.HMaster: Failed to become active master
java.net.ConnectException: Call From hadoop2/127.0.0.1 to localhost:8020 failed
on connection exception: java.net.ConnectException: Connection refused; For
more details see: http://wiki.apache.org/hadoop/ConnectionRefused
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
...
  at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264)
  at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986)
  at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970)
  at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525)
  at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971)

Was the namenode running fine on localhost ?

Can you pastebin the contents of hbase-site.xml ?
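One detail worth checking while gathering that file: the FATAL trace above shows HBase dialing localhost:8020, the stock NameNode RPC port, while the core-site.xml attached later in this thread sets fs.defaultFS to hdfs://localhost:9000. If hbase.rootdir omits the port (e.g. hdfs://localhost/hbase) or is unset, the HDFS client assumes the default 8020 and gets "Connection refused". A minimal sketch of an hbase-site.xml that matches the attached Hadoop configs (this is an illustration, not the poster's actual file):

```xml
<!-- hbase-site.xml sketch: hbase.rootdir must use the same host:port
     as fs.defaultFS in core-site.xml (hdfs://localhost:9000 here). -->
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
</configuration>
```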

On Mon, Nov 14, 2016 at 7:40 PM, QI Congyun <[email protected]> wrote:

> Dear Ted,
>
> I have gone through the HBase quick-start guide; although I have not yet
> read the whole document, I know how to configure the primary HBase
> parameters and perform basic operations.
>
> I have tried both letting HBase manage ZooKeeper and preventing HBase
> from starting a ZooKeeper server, by setting export
> HBASE_MANAGES_ZK=true/false in hbase-env.sh. Whether ZooKeeper is
> launched by HBase automatically or started manually, the same problems
> and logs appear, as in the output you quoted. I don't understand why the
> ZooKeeper SASL authentication fails.
>
> Actually, when I run "start-hbase.sh", the master process starts at
> first but then exits by itself, while the ZooKeeper quorum process keeps
> running until I kill it manually. I used the "jps" command to observe
> the processes.
>
> Thanks.
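For reference, the HBASE_MANAGES_ZK toggle mentioned above lives in conf/hbase-env.sh; a minimal fragment (a sketch, not the poster's actual file):

```shell
# conf/hbase-env.sh fragment (sketch): "false" means start-hbase.sh will
# NOT spawn its own HQuorumPeer process; an externally started ZooKeeper
# reachable via hbase.zookeeper.quorum is expected instead. "true" (the
# default) makes HBase start and stop ZooKeeper itself.
export HBASE_MANAGES_ZK=false
```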
>
>
> -----Original Message-----
> From: Ted Yu [mailto:[email protected]]
> Sent: Tuesday, November 15, 2016 11:01 AM
> To: [email protected]
> Subject: problem in launching HBase
>
> 2016-11-10 11:25:14,177 INFO  [main-SendThread(localhost:2181)]
> zookeeper.ClientCnxn: Opening socket connection to server
> localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL
> (unknown error)
>
> Was the zookeeper quorum running on the localhost ?
>
> In the future, use pastebin for passing config / log files - 
> attachment would be stripped by mailing list.
>
> Have you read this ?
>
> http://hbase.apache.org/book.html#quickstart_fully_distributed
>
> On Mon, Nov 14, 2016 at 6:26 PM, QI Congyun <[email protected]> wrote:
>
> >
> > My previous e-mail is attached; please check whether the enclosed
> > traces are sufficient to investigate.
> > My node configuration files are also enclosed.
> >
> > Thanks a lot.
> >
> >
> > ---------- Forwarded message ----------
> > From: QI Congyun <[email protected]>
> > To: "[email protected]" <[email protected]>
> > Cc:
> > Date: Mon, 14 Nov 2016 08:20:11 +0000
> > Subject: my questions about launching HBase
> >
> > Hi, Specialist,
> >
> >
> >
> > I am trying to set up an HBase database, but HBase keeps raising
> > errors. I had sent an e-mail to one of the HBase mailing lists, but it
> > was rejected several times, so I am now submitting my questions to this
> > address and hope to receive your response.
> >
> > Thanks a lot.
> >
> >
> >
> > My Hadoop version is Hadoop-2.7.3,
> >
> > my OS is CentOS Linux 6.4.
> >
> >
> >
> >
> > ---------- Forwarded message ----------
> > From: QI Congyun <[email protected]>
> > To: "[email protected]" <[email protected]>
> > Cc:
> > Date: Fri, 11 Nov 2016 02:01:58 +0000
> > Subject: FW: my questions are always not resolved about hbase
> >
> >
> >
> > The e-mail could not be delivered to the destination mailbox, so I am resending it.
> >
> >
> >
> > Thanks.
> >
> >
> >
> > *From:* QI Congyun
> > *Sent:* Thursday, November 10, 2016 11:53 AM
> > *To:* '[email protected]'
> > *Subject:* my questions are always not resolved about hbase
> >
> >
> >
> > Hello sir,
> >
> >
> >
> > So sorry to bother you. I am interested in the Hadoop ecosystem and
> > have attempted to use Hadoop and HBase, but an HBase issue cannot be
> > resolved. Could you help me? Thanks in advance.
> >
> >
> >
> > 1.       I am puzzled why the same issue is encountered every time
> > HBase is launched; the output is attached as follows:
> >
> >
> >
> > [hadoop@hadoop2 hbase-1.2.3]$ bin/start-hbase.sh
> >
> > localhost: starting zookeeper, logging to
> > /home/hadoop/hbase-1.2.3/bin/../logs/hbase-hadoop-zookeeper-hadoop2.out
> >
> > localhost: java.io.IOException: Unable to create data dir
> > /home/testuser/zookeeper
> > localhost:      at
> > org.apache.hadoop.hbase.zookeeper.HQuorumPeer.writeMyID(HQuorumPeer.java:157)
> > localhost:      at
> > org.apache.hadoop.hbase.zookeeper.HQuorumPeer.main(HQuorumPeer.java:70)
> >
> > starting master, logging to
> > /home/hadoop/hbase-1.2.3/logs/hbase-hadoop-master-hadoop2.out
> >
> > starting regionserver, logging to
> > /home/hadoop/hbase-1.2.3/logs/hbase-hadoop-1-regionserver-hadoop2.out
> >
> > ……………
> >
> > [hadoop@hadoop2 hbase-1.2.3]$ jps
> > 11805 SecondaryNameNode
> > 11614 DataNode
> > 507 NodeManager
> > 30687 HRegionServer
> > 385 ResourceManager
> > 11379 NameNode
> > 30899 Jps
> > ..............
> > [hadoop@hadoop2 hbase-1.2.3]$ bin/stop-hbase.sh
> > stopping hbase
> > cat: /tmp/hbase-hadoop-master.pid: No such file or directory
> >
> > localhost: no zookeeper to stop because no pid file
> > /tmp/hbase-hadoop-zookeeper.pid
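The "Unable to create data dir /home/testuser/zookeeper" IOException earlier in this output usually means hbase.zookeeper.property.dataDir points at a path the user running HBase cannot create; /home/testuser/zookeeper is the value used in the quick-start examples and evidently does not exist on this host. A sketch of the relevant hbase-site.xml fragment, where /home/hadoop/zookeeper is an assumed location writable by the "hadoop" user running HBase here:

```xml
<!-- hbase-site.xml fragment (sketch): hbase.zookeeper.property.dataDir
     maps to ZooKeeper's dataDir. Point it at a directory the HBase
     process user can create; /home/hadoop/zookeeper is an assumption. -->
<property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/zookeeper</value>
</property>
```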
> >
> >
> >
> > 2.       When I check the logs, a fatal error is raised once again,
> > but I don't know why.
> >
> >
> >
> > 2016-11-10 11:25:14,177 INFO  [main-SendThread(localhost:2181)]
> > zookeeper.ClientCnxn: Opening socket connection to server
> > localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL
> > (unknown error)
> > 2016-11-10 11:25:14,181 WARN  [main-SendThread(localhost:2181)]
> > zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error,
> > closing socket connection and attempting reconnect
> > java.net.ConnectException: Connection refused
> >     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> >     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
> >     at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
> >     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
> > 2016-11-10 11:25:14,291 INFO  [main-SendThread(localhost:2181)]
> > zookeeper.ClientCnxn: Opening socket connection to server
> > localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate
> > using SASL (unknown error)
> > 2016-11-10 11:25:14,294 WARN  [main-SendThread(localhost:2181)]
> > zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error,
> > closing socket connection and attempting reconnect
> > java.net.ConnectException: Connection refused
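"Connection refused" from ClientCnxn means nothing was listening on port 2181 when the client connected; the SASL line is informational, not the failure. A quick way to verify what is listening is to attempt the same TCP connects by hand (a sketch relying on bash's /dev/tcp redirection; port 9000 comes from fs.defaultFS in the attached core-site.xml):

```shell
#!/usr/bin/env bash
# Probe a TCP port the way the ZooKeeper client does: just try to connect.
check_port() {
    local host=$1 port=$2
    # The subshell opens fd 3 on /dev/tcp and closes it again on exit.
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
        echo "open"
    else
        echo "refused"
    fi
}
check_port localhost 2181   # ZooKeeper client port
check_port localhost 9000   # NameNode RPC port per core-site.xml
```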
> >
> >
> >
> >
> >
> >
>
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- hdfs-site.xml (file name inferred from the dfs.* property names) -->
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file://${hadoop.tmp.dir}/dfs/name</value>
    </property> 
    <property>
        <name>dfs.namenode.edits.dir</name>
        <value>file://${hadoop.tmp.dir}/dfs/name</value>
    </property>   
    <property>
        <name>dfs.namenode.handler.count</name>
        <value>100</value>
    </property>    
    <property>
        <name>dfs.blocksize</name>
        <value>268435456</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>0.0.0.0:9870</value>
    </property>      
    <property>
        <name>dfs.datanode.address</name>
        <value>0.0.0.0:9866</value>
    </property>    
    <property>
        <name>dfs.datanode.http.address</name>
        <value>0.0.0.0:9864</value>
    </property>     
     <property>
        <name>dfs.datanode.ipc.address</name>
        <value>0.0.0.0:9867</value>
    </property>    
    <property>
        <name>dfs.datanode.handler.count</name>
        <value>10</value>
    </property>     
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file://${hadoop.tmp.dir}/dfs/data</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir.perm</name>
        <value>700</value>
    </property>  

</configuration>
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- mapred-site.xml (file name inferred from the mapreduce.* property names) -->
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>0.0.0.0:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>0.0.0.0:19888</value>
    </property>
    <property>
        <name>mapreduce.admin.user.env</name>
        <value>HADOOP_MAPRED_HOME=$HADOOP_COMMON_HOME</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=$HADOOP_COMMON_HOME</value>
    </property>
    <property>
        <name>mapreduce.task.io.sort.factor</name>
        <value>100</value>
    </property>
    <property>
        <name>mapreduce.reduce.shuffle.parallelcopies</name>
        <value>50</value>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>1024</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>2048</value>
    </property>

</configuration>
<?xml version="1.0"?>
<!-- yarn-site.xml (file name inferred from the yarn.* property names) -->
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>0.0.0.0</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>${yarn.resourcemanager.hostname}:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>${yarn.resourcemanager.hostname}:8031</value>
    </property>    
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>${yarn.resourcemanager.hostname}:8032</value>
    </property> 
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>${yarn.resourcemanager.hostname}:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>${yarn.resourcemanager.hostname}:8088</value>
    </property> 

    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>1024</value>
    </property> 
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>8192</value>
    </property> 
    
</configuration>
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- core-site.xml (file name inferred from fs.defaultFS and hadoop.tmp.dir) -->
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/tmp/hadoop-${user.name}</value>
    </property>

</configuration>
